

You are right, I missed that they are still open for the other services. I have mine running behind Traefik and removed all port definitions.
I will change the compose file to only expose Reitti.
Thanks for the feedback🙏
Greetings, @[email protected], that sounds like a truly wonderful idea, and as a fellow cat owner, it brings me great joy to hear about it. 😻
In fact, I have recently changed the analysis of data, which is now performed in near real-time as soon as new data becomes available. I am currently working on the functionality to display multiple users (or, in your case, Pandora) on the map, which should be beneficial to your idea.
Now, the primary question is how we can integrate the data from the tracker into Reitti. That is something I have no idea about at the moment. Do you have any information on that?
This process is not triggered by any external events. Every ten minutes, an internal background job activates. Its function is to scan the database for any RawLocationPoints that haven't been processed yet. These unprocessed points are then batched into groups of 100, and each batch is sent as a message to be consumed by the stay-detection-queue. This process naturally adds to the workload of that queue.
However, if no new location data is being ingested, once all RawLocationPoints have been processed and their respective flags set, the stay-detection-queue should eventually clear, and the system should return to an idle state. I'm still puzzled as to why this initial queue (stay-detection-queue) is exhibiting such slow performance for you, as it's typically one of the faster steps.
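Just to illustrate that flow, a simplified sketch of such a scheduled job could look like the following. The class, record and repository names here are made up for illustration (they are not Reitti's actual code), and a JSON message converter for RabbitMQ is assumed.

```java
// Illustrative sketch only - these names are assumptions, not Reitti's actual classes.
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.util.List;

// Hypothetical shape of a raw location point.
record RawLocationPoint(long id, double lat, double lon, boolean processed) {}

// Hypothetical persistence abstraction over the PostGIS DB.
interface RawLocationPointRepository {
    List<RawLocationPoint> findByProcessedFalse();
}

@Component
class StayDetectionScheduler {

    private static final int BATCH_SIZE = 100;
    private static final String STAY_DETECTION_QUEUE = "stay-detection-queue";

    private final RawLocationPointRepository repository;
    private final RabbitTemplate rabbitTemplate; // assumes a Jackson2JsonMessageConverter is configured

    StayDetectionScheduler(RawLocationPointRepository repository, RabbitTemplate rabbitTemplate) {
        this.repository = repository;
        this.rabbitTemplate = rabbitTemplate;
    }

    // Runs every ten minutes, picks up unprocessed points and queues them in batches of 100.
    @Scheduled(fixedRate = 10 * 60 * 1000)
    public void enqueueUnprocessedPoints() {
        List<RawLocationPoint> unprocessed = repository.findByProcessedFalse();
        for (int i = 0; i < unprocessed.size(); i += BATCH_SIZE) {
            List<RawLocationPoint> batch =
                    unprocessed.subList(i, Math.min(i + BATCH_SIZE, unprocessed.size()));
            rabbitTemplate.convertAndSend(STAY_DETECTION_QUEUE, batch);
        }
    }
}
```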
That’s good, but I still question why it is so slow. If you receive these timeout exceptions more often, at some point the data will cease to be analyzed.
I just re-tested it with multiple concurrent imports into a clean DB, and the stay-detection-queue completed in 10 minutes. It's not normal for it to take that long for you. The component that should take the most time is actually the merge-visit-queue, because this creates a lot of stress for the DB. This test was conducted on my laptop, equipped with an AMD Ryzen™ 7 PRO 8840U and 32 GB of RAM.
Thanks for getting back to me. I can look into it. I don’t think it’s connected, but you never know.
The data goes the same way: first to RabbitMQ and then to the database. So it shouldn't matter; it's just another message, or a bunch of them, in the queue.
Hmm, I had hoped you'd say something like a Raspberry Pi :D
But that should be enough to have it processed in a reasonable time. What I do not understand at the moment is that the file size should not affect it in any way. During import, 100 geopoints are bundled and sent to RabbitMQ. From there we retrieve them, do some filtering and save them in the database. After that, nothing happens until the next processing run is triggered.
But that run then works with the PostGIS DB and not with the file anymore. So the culprit should be somewhere in there. I will try to insert some fake data into mine and see how long it takes if I double my location points.
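As a rough illustration of that import path, a consumer on the other side of the queue could look something like this sketch. The queue name, types and store interface are placeholders, not Reitti's actual code, and a JSON message converter is assumed.

```java
// Illustrative consumer sketch - queue name and types are assumptions, not Reitti's actual code.
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

import java.util.List;

// Hypothetical shape of an incoming geopoint.
record IncomingPoint(double lat, double lon, long timestamp) {}

// Hypothetical persistence abstraction backed by the PostGIS DB.
interface IncomingPointStore {
    void saveAll(List<IncomingPoint> points);
}

@Component
class LocationIngestListener {

    private final IncomingPointStore store;

    LocationIngestListener(IncomingPointStore store) {
        this.store = store;
    }

    // Consumes one bundle of (up to) 100 geopoints, drops invalid coordinates and persists
    // the rest; nothing else happens until the next processing run is triggered.
    @RabbitListener(queues = "location-ingest-queue") // placeholder queue name
    public void handleBatch(List<IncomingPoint> batch) {
        List<IncomingPoint> filtered = batch.stream()
                .filter(p -> p.lat() >= -90 && p.lat() <= 90)
                .filter(p -> p.lon() >= -180 && p.lon() <= 180)
                .toList();
        store.saveAll(filtered);
    }
}
```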
Thanks for the information. I will try to recreate it locally. In my testing I used a 600 MB file, and it took maybe 2 hours to process on my server, which has one of those Ryzen 7 5825U chips. Since Reitti tries to do this analysis on multiple cores, we start it with 4 to 16 threads when processing. But stay detection breaks when done that way, so it locks per user to handle that. If one of them now takes a long time, the others will eventually fail. They get rescheduled 3 times until RabbitMQ gives up.
On what type of system do you run it?
I will add some switches so the number of threads is configurable, and add some log statements to print out how long a single step took.
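A minimal sketch of such a switch plus duration logging, assuming a Spring property for the thread count; the property name and classes below are placeholders, not the final implementation.

```java
// Rough sketch - the property name and classes here are placeholders, not the final implementation.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

@Configuration
class ProcessingConfig {

    private static final Logger log = LoggerFactory.getLogger(ProcessingConfig.class);

    // Placeholder property; falls back to 4 worker threads if nothing is configured.
    @Value("${reitti.processing.threads:4}")
    private int processingThreads;

    @Bean
    ExecutorService processingExecutor() {
        log.info("Starting processing pool with {} threads", processingThreads);
        return Executors.newFixedThreadPool(processingThreads);
    }

    // Small helper to log how long a single processing step took.
    static <T> T timed(String step, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            long millis = (System.nanoTime() - start) / 1_000_000;
            log.info("{} took {} ms", step, millis);
        }
    }
}
```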
Congratulations 😆
To help with that I would need some information:
Thank you for testing 🙂
I would not say compete. They are different in how things are done, from my point of view. I want to focus more on the visits we have done in the past, to relive some lost memories, whereas Dawarich looks more "technical" to me. I have no better words for it; I hope you get my point about what I am trying to achieve with Reitti. So there should be enough room for both 🙂
I also do not have any intention of offering a hosted version in the foreseeable future, or maybe ever.
Thanks :)
No, that did not occur to me. What would the integration look like? Connecting it to the message bus to receive location updates? Honestly, it has been a couple of years since I last played with HA.
I do not think it is that complicated. The front-end sends a request to the back-end with the currently selected day. This triggers a search in Immich returning all photos taken on that specific day. The result is returned to the front-end, which then does the heavy lifting, like filtering the photos to the current map bounds and displaying them on the map at a specific location. We proxy all requests from the front-end through our server because of CORS issues, and I tried to avoid having to configure anything in Immich besides creating a token for the API.
One would need to create a specific IntegrationService like ImmichIntegrationService and then figure out how the user can configure it. The easiest approach would be to simply call all available ones, even if I do not see the use case for having multiple photo servers. But it would make the code in Reitti cleaner and would not hurt as long as we do not configure 20 simultaneous servers :D
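To make the idea concrete, a sketch of such an abstraction could look like the following. The interface, class names, the Immich endpoint and the header are illustrative assumptions only; the real Immich search parameters need to be looked up in its API documentation.

```java
// Rough sketch of a pluggable photo integration - names, endpoint and header are
// illustrative assumptions, not Reitti's or Immich's actual API.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.LocalDate;

// Generic abstraction so other photo servers could be plugged in later.
interface PhotoIntegrationService {
    // Returns the raw JSON of all photos taken on the given day; parsing is left to the caller.
    String searchPhotosTakenOn(LocalDate day);
}

class ImmichIntegrationService implements PhotoIntegrationService {

    private final HttpClient client = HttpClient.newHttpClient();
    private final String baseUrl; // e.g. https://immich.example.com
    private final String apiKey;  // the token created in Immich for this integration

    ImmichIntegrationService(String baseUrl, String apiKey) {
        this.baseUrl = baseUrl;
        this.apiKey = apiKey;
    }

    @Override
    public String searchPhotosTakenOn(LocalDate day) {
        // Placeholder endpoint and query parameter - consult Immich's API docs for the real ones.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/search?takenOn=" + day))
                .header("x-api-key", apiKey)
                .GET()
                .build();
        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        } catch (Exception e) {
            throw new RuntimeException("Immich request failed", e);
        }
    }
}
```

With an interface like that, all configured implementations could simply be collected and called one after another, which matches the "just call all available ones" idea above.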
No, that would not be a problem, as long as the other image library has an API Reitti could query. It just happens that I am settled on Immich and have no other needs at the moment.
If you need a specific one, drop a feature request and I will have a look.
I have no clue if a Raspberry Pi will handle it. There are a couple of services involved to make it fast, but they are then an additional burden, like RabbitMQ, which makes ingesting data instantaneous but needs extra processing power to handle the queues. It all comes with a tradeoff.
For size, there is mainly the PostGIS DB. I just checked, and my DB is around 800 MB for roughly 8.5 years of data.
Photon (the reverse geocoder enabled in the compose file) is another beast. For Germany it takes 14 GB of storage while running, and if you leave PARALLEL updates enabled you can double that every time the index is updated. But you can remove it from the compose file and rely on external geocoders. This is described in https://github.com/dedicatedcode/reitti?tab=readme-ov-file#reverse-geocoding-options
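For reference, a reverse geocode lookup is just a single HTTP call. A minimal sketch against Photon's /reverse endpoint, assuming it is reachable on its default port 2322 (adjust host and port to your setup):

```java
// Minimal sketch of a reverse geocode lookup against a local Photon instance.
// Host and port are assumptions - adjust to wherever Photon runs in your setup.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReverseGeocodeExample {
    public static void main(String[] args) throws Exception {
        double lat = 52.52, lon = 13.405; // example coordinates (Berlin)
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:2322/reverse?lat=" + lat + "&lon=" + lon))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // GeoJSON response with the nearest address
    }
}
```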
Someone would need to get that data out of the app, either as GPX or some other format. I do not think these devices are able to send their position to some configurable endpoint. A quick search for the Nut tracker and Home Assistant does not yield any reasonable results.
I think we are out of luck at that point. But maybe someone knows about a more open tracker device which could be configured.