

Glad I could help :)
This process is not triggered by any external events. Every ten minutes, an internal background job activates and scans the database for any RawLocationPoints that haven’t been processed yet. These unprocessed points are batched into groups of 100, and each batch is sent as a message to be consumed by the stay-detection-queue, which naturally adds to that queue’s workload.
However, if no new location data is being ingested, then once all RawLocationPoints have been processed and their respective flags set, the stay-detection-queue should eventually clear, and the system should return to an idle state. I’m still puzzled as to why this initial queue (stay-detection-queue) is exhibiting such slow performance for you, as it’s typically one of the faster steps.
Thank you for testing Reitti. 🙏
It depends on two key requirements for Reitti:
If the EXIF data does not contain geolocation information, we currently cannot display those photos because their placement on the map cannot be determined.
Could you please verify in Immich if the expected photo has its location in the metadata? If it is available there, then the issue might lie in how Reitti is parsing that specific data.
That’s good, but I still question why it is so slow. If you receive these timeout exceptions more often, at some point the data will cease to be analyzed.
I just re-tested it with multiple concurrent imports into a clean DB, and the stay-detection-queue completed in 10 minutes. It’s not normal for it to take that long for you. The component that should take the most time is actually the merge-visit-queue, because it puts a lot of stress on the DB. This test was conducted on my laptop, equipped with an AMD Ryzen™ 7 PRO 8840U and 32GB of RAM.
Thanks for getting back to me. I can look into it. I don’t think it’s connected, but you never know.
The data goes the same way, first to RabbitMQ and then the database. So it shouldn’t matter, it’s just another message or a bunch of them in the queue.
It is actually awesome if you have some old photos with geodata attached, skim through Reitti, and suddenly one of them shows up :)
Hmm, I had hoped you’d say something like a Raspberry Pi :D
But this should be enough to have it processed in a reasonable time. What I do not understand at the moment is that the file size should not affect it in any way. When importing, 100 geopoints are bundled and sent to RabbitMQ. From there we retrieve them, do some filtering, and save them in the database. Then nothing actually happens anymore until the next processing run is triggered.
But that run works against the PostGIS DB, not the file anymore, so the culprit should be somewhere there. I will try to insert some fake data into mine and see how long it takes if I double my location points.
Thanks for the information. I will try to recreate it locally. In my testing I used a 600MB file, and this took maybe 2 hours to process on my server, one of those Ryzen 7 5825U machines. Since Reitti tries to run these analyses on multiple cores, we start with 4 to 16 threads when processing. But stay detection breaks when done that way, so it locks per user to handle that. If one of them now takes a long time, the others will eventually fail. They get rescheduled 3 times until RabbitMQ gives up.
On what type of system do you run it?
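The per-user locking I mentioned could look roughly like this. A minimal sketch under my own assumptions — the class and method names are mine, not Reitti’s actual implementation; only the idea (many worker threads, but never two working on the same user at once) comes from the description above.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of per-user locking: stay detection runs on several worker threads,
// but work for a single user must not run concurrently, so each user gets a
// dedicated lock. Illustrative names, not Reitti's real code.
public class PerUserLock {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    void processForUser(String userId, Runnable work) {
        ReentrantLock lock = locks.computeIfAbsent(userId, id -> new ReentrantLock());
        lock.lock();          // a second thread handling the same user waits here
        try {
            work.run();
        } finally {
            lock.unlock();
        }
    }

    // Small self-check: two sequential tasks for the same user both run.
    static int demo() {
        PerUserLock guard = new PerUserLock();
        int[] counter = {0};
        guard.processForUser("alice", () -> counter[0]++);
        guard.processForUser("alice", () -> counter[0]++);
        return counter[0];
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The downside is exactly what I described: if one user’s batch is slow, all other messages for that user queue up behind the lock and can hit their redelivery limit.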
I will add some switches to make the number of threads configurable, and add some log statements to print out how long each single step took.
It was not intentional, but after not bothering about it because I had other things on my mind, I got used to it, and now I like it the way it is.
But for everyone who is bothered by that: if Reitti reaches 1k stars on GitHub, I will add a switch to use a centered one 😊
Congratulations 😆
To help with that I would need some information:
Thank you for testing 🙂
Oh, I had the idea in mind of what I wanted to create, and then it was a matter of a couple of Google queries. In the end, one of the LLMs suggested a list of different names in foreign languages, and Reitti somehow stuck 😊
I had a similar setup with Home Assistant in the past, so I understand your use case. For Reitti to detect visits somewhat reliably, it needs at least one location datapoint per minute. We build location clusters from a minimum of 5 points in 5 minutes. If HA tracks that often, it should work; HA probably tracks more often than that.
I could add an integration so that Reitti fetches the data from Home Assistant. Would you mind creating a feature request?
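The “5 points in 5 minutes” threshold can be sketched as a simple check on sorted timestamps. This is my own illustration of the rule, not Reitti’s actual clustering code, which also considers spatial proximity.

```java
import java.util.List;

// Sketch of the clustering threshold: a candidate visit needs at least
// 5 points within a 5-minute window. Illustration only, not Reitti's code.
public class ClusterRule {
    static final int MIN_POINTS = 5;
    static final long WINDOW_SECONDS = 5 * 60;

    // timestamps are epoch seconds, sorted ascending
    static boolean hasDenseWindow(List<Long> timestamps) {
        for (int start = 0; start + MIN_POINTS <= timestamps.size(); start++) {
            long span = timestamps.get(start + MIN_POINTS - 1) - timestamps.get(start);
            if (span <= WINDOW_SECONDS) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // one point per minute for five minutes -> dense enough
        System.out.println(hasDenseWindow(List.of(0L, 60L, 120L, 180L, 240L)));
        // one point every three minutes -> too sparse
        System.out.println(hasDenseWindow(List.of(0L, 180L, 360L, 540L, 720L)));
    }
}
```

This is why a tracker reporting once a minute (or more often, like HA) clears the bar, while sparser sources may never form a visit.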
I have no experience with Traccar, but it seems to support live tracking, which is something Reitti does not. It depends on your use case, but I think Traccar is better suited.
I looked at the Docker image I am using in the docker-compose file, and it only supports a single country code. The actual reason can be found here: https://github.com/rtuszik/photon-docker/blob/3b63df49fbc0a77cafcbd6e6be2b8857c12b9143/start-photon.sh#L341C5-L342C7
It is probably possible if you deploy Photon on its own and then import the data somehow. But that is too much hassle for me; I think and hope that most use cases are handled by the current solution, at least for most potential users. But I get the point if someone travels a lot between countries.
If there is enough demand I could maybe try to create a PR for the Docker image to handle multiple country codes.
I think this is not exposed when running the Docker container. But let me check later, when I have time, what happens if I put another country in that variable.
I would not say compete. From my point of view, they differ in how things are done. I want to focus more on the visits we made in the past, to relive some lost memories, whereas Dawarich looks more “technical” to me. I have no better words for it; I hope you get my point about what I am trying to achieve with Reitti. So there should be enough room for both 🙂
I also do not have any intentions to offer a hosted version in the foreseeable future or even anytime.
I used that once on a past gig, and it wasn’t very pleasant to use, especially in combination with Spring Boot. But that was a couple of years ago; maybe things have changed. Personally, I would prefer the executable jar from Spring Boot; with that, you do not have to take as many steps to make it work. But thanks for the suggestion :)
Good question, afaik you cannot enter multiple countries into Photon. I was hoping it would be possible, but everything I saw says it is either one country or the whole world. But maybe you can have a look here: https://github.com/komoot/photon That is the service we are using.
I was thinking about that, but the main problem is that we do not store all the data that comes in.
If we ingest data from an app, I am pretty sure the quality of the data is actually usable. But if we import, for example, a Records.json from Google Takeout, the quality of the earlier years is somewhat sketchy. For this we filter out some points, such as travelling at over 2000 km/h, sudden direction changes, etc., and they are lost forever; at least to Reitti they are unknown.
The feature would need a lot of explanation of why the data we export is not the same as the data we import. That is the reason I have not implemented it, even though it would come in handy for testing. Handling GPX files is a pita …
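The speed filter mentioned above can be illustrated like this. A sketch under my own assumptions — the haversine-based check and all names are my approximation, not Reitti’s actual filtering code; only the 2000 km/h threshold comes from the text.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a sanity filter: drop any point that would imply
// travelling faster than 2000 km/h from the last kept point. My own
// approximation, not Reitti's actual filtering code.
public class SpeedFilter {
    record Point(double lat, double lon, long epochSeconds) {}

    // Great-circle distance in kilometres (haversine formula).
    static double haversineKm(Point a, Point b) {
        double dLat = Math.toRadians(b.lat() - a.lat());
        double dLon = Math.toRadians(b.lon() - a.lon());
        double h = Math.pow(Math.sin(dLat / 2), 2)
                 + Math.cos(Math.toRadians(a.lat())) * Math.cos(Math.toRadians(b.lat()))
                 * Math.pow(Math.sin(dLon / 2), 2);
        return 2 * 6371.0 * Math.asin(Math.sqrt(h));
    }

    // Keep a point only if the implied speed from the last kept point is plausible.
    static List<Point> filter(List<Point> points, double maxKmh) {
        List<Point> kept = new ArrayList<>();
        for (Point p : points) {
            if (kept.isEmpty()) {
                kept.add(p);
                continue;
            }
            Point prev = kept.get(kept.size() - 1);
            double hours = (p.epochSeconds() - prev.epochSeconds()) / 3600.0;
            double speedKmh = hours > 0 ? haversineKm(prev, p) / hours : Double.MAX_VALUE;
            if (speedKmh <= maxKmh) {
                kept.add(p);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Point> pts = List.of(
            new Point(52.5200, 13.4050, 0),    // Berlin
            new Point(48.1374, 11.5755, 600),  // "Munich" 10 minutes later: ~3000 km/h, dropped
            new Point(52.5210, 13.4060, 900)   // back near Berlin: plausible, kept
        );
        System.out.println(filter(pts, 2000.0).size());
    }
}
```

Points discarded this way are exactly the ones an export could never reproduce, which is why an exported file would not round-trip to the original import.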
I am glad it worked out for you in importing the first file. I am still puzzled why it took that long.
For the new format, did you have Android or iOS? With the timeline export from Google Maps on iOS, we cannot do anything at the moment because there is actually no raw data in it, only information like “you stayed at this point during this timeframe” and “you travelled between these points”. It’s actually a little bit funny that it aggregates to the same data Reitti produces in the end.
If you are on Android, it could also be a bug when importing that file; I only had a small export from one of my accounts to test. If you don’t mind creating a bug report, I will have a look. If you do not want to attach the export file there, feel free to send it to daniel@dedicatedcode.com, and I will look at it privately. No problem.
For the overlap in exports, it depends. If the points are the same, meaning they have the same timestamp, then Reitti will discard them. If not, they will be handled like any other new data and will result in recalculating visits and trips around that particular time.
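The discard-on-same-timestamp rule can be sketched like this. A minimal illustration under my own assumptions — the names are mine, not Reitti’s actual deduplication code, which keys on more than a bare timestamp in practice.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the overlap rule: an incoming point is discarded if a point
// with the same timestamp is already stored; everything else is treated
// as new data. Illustrative names, not Reitti's real code.
public class Dedup {
    // existing: timestamps already stored; incoming: timestamps from the new import
    static List<Long> acceptNew(Set<Long> existing, List<Long> incoming) {
        List<Long> accepted = new ArrayList<>();
        for (Long ts : incoming) {
            if (existing.add(ts)) {   // add() returns false if the timestamp was already present
                accepted.add(ts);
            }
        }
        return accepted;
    }

    public static void main(String[] args) {
        Set<Long> stored = new HashSet<>(List.of(100L, 200L));
        // 200 overlaps and is discarded; 300 is new and triggers recalculation
        System.out.println(acceptNew(stored, List.of(200L, 300L)));
    }
}
```

Only the accepted points then feed back into the visit and trip recalculation around their timeframe.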