So, I have data with latitude/longitude and a radius for each geo-fence, and I have beacons set up as well. Everything is stored in PostgreSQL + PostGIS, and I'm working in Python.
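Since the fences are already in PostGIS, a point-in-geofence lookup can be pushed into the database with `ST_DWithin` on the `geography` type, which measures in metres. This is only a sketch: the table and column names (`geofences`, `center`, `radius_m`) are assumptions, so adapt them to your actual schema.

```python
# Sketch of a point-in-geofence lookup against PostGIS.
# Assumed schema: geofences(id, center geometry(Point, 4326), radius_m double precision).
FIND_GEOFENCES_SQL = """
SELECT id
FROM geofences
WHERE ST_DWithin(
    center::geography,                                        -- stored fence centre
    ST_SetSRID(ST_MakePoint(%(lon)s, %(lat)s), 4326)::geography,
    radius_m                                                  -- per-fence radius in metres
);
"""

def find_geofences(cursor, lat, lon):
    """Return the ids of all geofences whose radius covers (lat, lon)."""
    cursor.execute(FIND_GEOFENCES_SQL, {"lat": lat, "lon": lon})
    return [row[0] for row in cursor.fetchall()]
```

With a GiST index on `center`, this stays fast even with many fences, so the database does the heavy spatial filtering instead of Python.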
My problem is: I want to monitor/track as many POIs (set up by the client) as I can. So, for instance, take a look at this example of tracking all the POIs.
I was thinking about k-means, but it's not the most appropriate algorithm here: k-means is designed to minimize variance. That is appealing from a statistical and signal-processing point of view, but my data is not "linear" (distances on latitude/longitude are not Euclidean).
We have an app that collects GPS geopoints and beacon data from people who have installed it on their mobile phones. By triangulating the beacon and GPS data we know where a person has been and which places they have passed. The goal of this project is to develop an algorithm that tells us whether a person passed a certain place at a certain time; e.g. we would like to know whether a person has seen a poster/ad of a company. We collect GPS data from 2000 people a couple of times per day (the frequency depends on whether the device runs Android or iOS).
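The core "has this person passed the poster?" check can be sketched as scanning a user's track for the first fix within some radius of the poster. This is a simplified stand-in for the real logic (the function name, the `(timestamp, lat, lon)` track format, and the 50 m default radius are my assumptions, not anything from your system), and it ignores the beacon signal and the probability estimate for now.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points (spherical Earth)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def passed_poster(track, poster_lat, poster_lon, radius_m=50.0):
    """
    track: iterable of (timestamp, lat, lon) tuples, in time order.
    Returns the timestamp of the first fix within radius_m metres
    of the poster, or None if the user never came that close.
    """
    for ts, lat, lon in track:
        if haversine_m(lat, lon, poster_lat, poster_lon) <= radius_m:
            return ts
    return None
```

With only a few fixes per day per user, interpolating between consecutive fixes (did the segment between two points cross the radius?) would catch passes that happened between samples.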
We need an algorithm that can handle large amounts of data, because we collect more than a million GPS points and 100,000 poster contacts per day; this also means that data collection is continuous. Furthermore, we need to be able to operationalize the algorithm. The result of the data processing should contain the following information: user id, location of the poster, time, and the mathematical probability of being at that place (and, in a second step: speed, type of vehicle, and direction of the ad/poster).
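The output described above maps naturally onto one record type, with the second-step fields left optional until they are computed. The field names below are a sketch of that schema, not a fixed format.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class PosterContact:
    """One row of the processing result described above (field names assumed)."""
    user_id: str
    poster_lat: float
    poster_lon: float
    time: datetime
    probability: float                       # estimated probability of presence, 0..1
    # second-step fields, filled in by a later processing stage:
    speed_kmh: Optional[float] = None
    vehicle_type: Optional[str] = None       # e.g. "pedestrian", "car", "tram"
    poster_direction_deg: Optional[float] = None  # facing direction of the ad, degrees
```

Keeping the first-step and second-step fields in one record means the later enrichment stage can update rows in place instead of joining a second table.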
Is it possible to achieve something like that in Python? If not, what else do I need to do?

Tags: python-3.x, algorithm, geolocation, postgis, geofencing