In November, our team was mostly focused on internal work – improving platform stability, expanding data center infrastructure, enhancing our human engineering processes with additional AI services, and so on.
On Saturday, November 17, we had 72 seconds of downtime: our REST API services were overloaded by a specifically formatted request pattern that slipped past our validation layer. Even though it was a day off, we shipped a hotfix within an hour to prevent similar cases in the future, and on Monday we rolled out a permanent protection layer to enforce this fix.
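To illustrate the idea behind such a protection layer (the actual flespi code is internal and certainly different), here is a minimal sketch of a request guard that rejects bodies matching a dangerous shape before they reach the backend services; all names and limits below are assumptions for illustration only.

```python
# A minimal, hypothetical sketch of a request protection layer: reject payloads
# with a dangerous shape (too big, too many items, too deeply nested) before
# they reach the backend. Limits and names are illustrative, not flespi's.

import json

MAX_BODY_BYTES = 1 << 20      # assumed limit: 1 MiB per request body
MAX_ITEMS = 10_000            # assumed limit: items in a single batch request
MAX_NESTING_DEPTH = 32        # assumed limit: JSON nesting depth

class RejectedRequest(Exception):
    """Raised when a request is dropped by the protection layer."""

def _depth(value, level=1):
    # Recursively measure how deeply the JSON value is nested.
    if isinstance(value, dict):
        return max((_depth(v, level + 1) for v in value.values()), default=level)
    if isinstance(value, list):
        return max((_depth(v, level + 1) for v in value), default=level)
    return level

def guard_request(raw_body: bytes):
    """Validate a raw JSON body and return the parsed payload, or raise."""
    if len(raw_body) > MAX_BODY_BYTES:
        raise RejectedRequest("body too large")
    try:
        payload = json.loads(raw_body)
    except ValueError as exc:
        raise RejectedRequest("malformed JSON") from exc
    if isinstance(payload, list) and len(payload) > MAX_ITEMS:
        raise RejectedRequest("too many items in batch")
    if _depth(payload) > MAX_NESTING_DEPTH:
        raise RejectedRequest("payload nested too deeply")
    return payload

if __name__ == "__main__":
    print(guard_request(b'[{"ident": "123456789012345"}]'))  # accepted
    try:
        guard_request(b'[' * 64 + b']' * 64)                  # rejected: too deep
    except RejectedRequest as err:
        print("rejected:", err)
```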
As announced in the previous month’s changelog, we restored per-channel media traffic counting. This became possible after refactoring the channel blocking system. Now, when a channel is blocked, we drop only connections from unregistered devices and, whenever possible, block the exact device causing the excess. This made such soft channel blocking almost a non-event for normally operating devices.
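Conceptually, the new blocking decision can be sketched like this – a simplified illustration of the behavior described above, not the actual production code:

```python
# A simplified, hypothetical sketch of the "soft blocking" decision: when a
# channel exceeds its traffic limit, only unregistered devices and explicitly
# blocked offenders are dropped; normally operating devices keep working.

from dataclasses import dataclass, field

@dataclass
class Connection:
    ident: str        # device identifier reported on the connection
    registered: bool  # True if a device with this ident is registered on the channel

@dataclass
class Channel:
    blocked: bool = False                                    # channel exceeded its limit
    blocked_idents: set[str] = field(default_factory=set)    # individually blocked devices

def should_drop(channel: Channel, conn: Connection) -> bool:
    """Decide whether an incoming connection should be dropped."""
    if conn.ident in channel.blocked_idents:
        return True                              # the exact offender is blocked
    if channel.blocked and not conn.registered:
        return True                              # soft block: only unregistered devices suffer
    return False                                 # registered devices keep working as usual

# Example: a soft-blocked channel with one explicitly blocked offender.
channel = Channel(blocked=True, blocked_idents={"350000000000001"})
print(should_drop(channel, Connection("350000000000001", registered=True)))   # True
print(should_drop(channel, Connection("unknown-device", registered=False)))   # True
print(should_drop(channel, Connection("350000000000002", registered=True)))   # False
```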
We disabled SEPA payment processing due to its extremely high latency (up to 14 days) and low reliability, which triggered quite a number of incidents for us. So if you relied on this method for auto-payments, please switch to a regular wire transfer or simply attach a credit card for stable, instant charges.
Finally, one of the formerly leading families of tacho-oriented devices, operating on the BCE protocol, received full tachograph support in flespi. These devices are engineered in Lithuania by the company previously known as Baltic Car Equipment and now known as Xirgo.
We also integrated the new Track-iot protocol into flespi.
One more article you should not miss this November explains how to organize multiple flespi accounts loaded with the same data, with some operations possibly limited (e.g., read-only) – for example, giving each support team member their own space and chat with flespi, or creating a dedicated per-project chat with codi. It’s a very short but practical guide based on my talk at the flespi conf, enhanced with our modern capabilities.
Also, we now have videos from the Telematics & Connected Mobility conference available on YouTube. We published a shortlist of selected talks, and if you want to dive into the mix of AI and telematics (mostly AI, of course), you can watch my other talk here.
And finally, about AI. We made great progress on it. Our AI assistant, codi, is doing solid work every day, handling around 90% of support and sales communication. We’re keeping it as is, only swapping models as new ones are released. But now we’re focusing on another task – an AI protocol engineer. This is far more complex and requires advanced models and technology to run.
So far, we’ve had strong results here with a huge amount of time and effort invested, and it’s definitely an area where we’ll keep pushing. Even now, the system is already bringing value to our daily protocol engineering work.
Technically, it works like this. We create a new session with the AI and invite it to work on an issue. The AI then runs the entire investigation: it collects traces, compares real traffic with the protocol documentation, and decides whether the problem is on the flespi side (implementation) or on the device side (firmware).
If the issue is in our implementation, the AI designs the solution architecture, taking into account all the specifics we have in protocol development. If the architecture is straightforward, it reports the design and moves directly to the implementation phase. If the architecture is not that simple or has some caveats, it first aligns the architecture and implementation scope with a human engineer.
Once implementation starts, the AI autonomously updates the code and creates new tests or adjusts existing ones to match the new logic. At some point, it may detect that the chosen architecture needs re-evaluation and jump back to the previous phase.
After a successful implementation, the AI switches to the quality review phase, which often brings additional changes and can also lead back to an architecture revision. Finally, the AI reports to a human engineer that the implementation is ready for deployment, which may involve a few more iterations.
When the human engineer deploys the changes and notifies the AI, the AI analyzes the impact. If everything looks fine, the AI enters a monitoring cycle, checking if any user reports an incident related to its work. After a month, the task is considered completed, and the AI stops.
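To make the phases and transitions above easier to follow, here is a compact sketch of the session lifecycle as a state machine. It’s a schematic illustration only – the phase names are simplified labels for the steps described above, not the actual code that drives the system, and the transition that closes a session on a device-side issue is an assumption.

```python
# A schematic sketch of the workflow phases and transitions described above.
# It only models the control flow; all the real work (investigation, coding,
# review, monitoring) happens inside each phase in the actual system.

from enum import Enum, auto

class Phase(Enum):
    INVESTIGATION = auto()   # trace traffic, compare with protocol docs, locate the problem
    ARCHITECTURE = auto()    # design the solution, align with a human if non-trivial
    IMPLEMENTATION = auto()  # update code, create or adjust tests
    QUALITY_REVIEW = auto()  # review changes, may trigger more edits
    DEPLOYMENT = auto()      # human engineer deploys, AI analyzes the impact
    MONITORING = auto()      # watch for related user incidents for a month
    DONE = auto()

# Allowed transitions, including the backward jumps mentioned above.
TRANSITIONS = {
    Phase.INVESTIGATION: {Phase.ARCHITECTURE, Phase.DONE},             # DONE: device-side issue (assumed)
    Phase.ARCHITECTURE: {Phase.IMPLEMENTATION},
    Phase.IMPLEMENTATION: {Phase.QUALITY_REVIEW, Phase.ARCHITECTURE},  # may re-evaluate the architecture
    Phase.QUALITY_REVIEW: {Phase.DEPLOYMENT, Phase.IMPLEMENTATION, Phase.ARCHITECTURE},
    Phase.DEPLOYMENT: {Phase.MONITORING},
    Phase.MONITORING: {Phase.DONE},                                    # completed after a month without incidents
}

def advance(current: Phase, target: Phase) -> Phase:
    """Move the session to the next phase, refusing transitions we did not describe."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"unexpected transition {current.name} -> {target.name}")
    return target

# Example: the happy path of one session.
phase = Phase.INVESTIGATION
for step in (Phase.ARCHITECTURE, Phase.IMPLEMENTATION, Phase.QUALITY_REVIEW,
             Phase.DEPLOYMENT, Phase.MONITORING, Phase.DONE):
    phase = advance(phase, step)
    print("->", phase.name)
```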
So here we’re trying to enhance human work with capabilities that are really hard to achieve manually – identifying possibly affected users, continuous impact monitoring, and so on. This is not a replacement for a human engineer, of course. It’s about building the right kind of symbiosis where AI takes over the heavy routine, keeps the process consistent, and watches the system far beyond what a person can realistically track.
We’re just at the beginning of this track, and someday I’ll definitely publish a more technical overview of this system with initial results, pitfalls, and expectations. So, stay tuned, and Merry Christmas to you! 🎄