Continuing a good tradition, let me recap the whole of 2025 for flespi and announce the direction of our development in 2026.
We do not have the data for December yet, but based on the first 11 months of this year, the platform has shown very sustainable growth in all directions.
In terms of registered devices, we have already grown by more than 350K, averaging almost 32K new device registrations per month. The device registration growth rate is 30% higher than the previous year, and I'm pretty sure we are already at the very top of the fastest-growing telematics platforms worldwide, especially if we talk about organic growth rather than growth via acquisitions.
From last year’s roadmap, we:
Analyzed, tried, and picked alternative partner solutions for direct registrations with OEMs. The situation may, and should, change later, but for now, our decision is based on current realities.
Postponed WebRTC streaming in video telematics to next year.
Finalized tacho functionality for all announced manufacturers and did much more beyond that. There is still plenty of room for improvement, but we already see solutions built on flespi specifically for tacho card management. With large fleets migrating to us, tacho has actually become one of our extra growth factors.
Implemented the assets subsystem for driver, trailer, and ride tracking. Around 6K assets are currently registered in flespi, mostly for driver management, and this number is growing.
As usual, we implemented a lot of interesting solutions outside of the annual plans, which, for some of our users, could be even more important. You can find most of them under the changelog tag on our forum, and as you can see, it is mostly about various protocols.
This year, we also survived a few very important incidents with our infrastructure. Each such incident, like a scar on the cheek, gives us painful but most valuable experience and ultimately makes us stronger. This time, we even survived a case where the entire data center lost power in a single moment. All in all, the platform was down for only 10 minutes. Given the scale of this incident, and comparing it to this year's multi-hour downtimes in AWS, GCP, Azure, and Cloudflare, I believe our team did its best to mitigate it so quickly.
The Telematics & Connected Mobility Conference, organized by Gurtam and presenting all kinds of hardware, software, and connectivity products, was a major milestone for us. By organizing such an event and inviting all market players, often competing with each other and, of course, with us, we established a new foundation for an open worldwide telematics community. The next T&CM conference will be held in 2027, with dates and location already secured.
And obviously, a lot of effort was invested in genAI. You can read a summary of our achievements and how we progressively improved them on the blog. Since the article's publication, there have been a few updates to Codi to improve its tooling and platform visibility in various aspects. We also usually adopt frontier models the day after they are released. But that is pretty much it. Codi stably handles around 90% of communication with flespi users month after month, and we now invest only minimal time into it.
Given the rapid growth we are experiencing and the traffic volumes we now pass through, our primary focus in 2026 will be on improving platform stability and incident tolerance. We still have a lot of room for improvement here.
I think this may take around 50% of our time. Even rolling out an OS upgrade to 200+ servers is a serious undertaking: even when automated, it still takes a couple of weeks to prepare and test upgrade scripts and execute them on live production servers.
We will continue delivering small but essential features that improve the usability and operational efficiency of flespi. We usually identify such small features in our weekly meetings and often implement them within days or weeks.
To give you an idea of their effect and the applications they can open up for you, let me share an example of such a small feature we are working on right now: the ability to set conditions for when to attempt execution, the number of retries, and the priority for queued commands. The condition will support the same flespi expressions used inside plugins, so you will be able to execute commands when a device is online inside a specific geofence, when it is moving or stopped, with datetime filtering, and so on. The feature itself is really narrow, but its effect on your application can be huge if you control devices remotely. Similarly, we will make narrow enhancements to existing functionality in other areas, providing our users with super-efficient tools for a large variety of tasks.
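To make this concrete, here is a minimal sketch of what such a conditional queued command could look like. The field names (`condition`, `retries`, `priority`) and the expression shown are my illustrative assumptions for a feature still in development, not the final flespi API.

```python
# Hypothetical sketch of a conditional queued command.
# Field names ("condition", "retries", "priority") and the expression
# below are illustrative assumptions, not the final flespi API.

def build_queued_command(name, payload, condition, retries=3, priority=0):
    """Assemble a command-queue entry with an execution condition."""
    return {
        "name": name,            # command name as defined by the device protocol
        "payload": payload,      # payload sent to the device
        "condition": condition,  # flespi-style expression gating execution
        "retries": retries,      # how many attempts before giving up
        "priority": priority,    # ordering relative to other queued commands
    }

# Example: send a relay command only while the device reports movement
cmd = build_queued_command(
    name="custom",
    payload="setdigout 1",
    condition="position.speed > 0",
)
print(cmd["condition"])  # position.speed > 0
```

The point of the condition field is that the platform, not your backend, decides the right moment to deliver the command, so your application no longer needs to poll device state before sending.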
Our main development focus still lies in advanced telematics solutions: everything related to tacho, video, CAN, and BLE processing, and the related flespi modules such as plugins and analytics. We will definitely implement video streaming via WebRTC and look into extracting data directly from .ddd tacho files.
In analytics, we have a few interesting features on the list for which we have not made a final decision yet, so they remain open questions for 2026. I cannot promise anything right now, but we are looking into group reports that combine numbers from pre-calculated devices into a single entity, and into mechanisms for simplifying actions triggered by calculated intervals, for example, auto-assigning devices to assets, executing device commands, and so on.
And of course, in 2026, we will continue the genAI track. Right now, the blocking factor is token billing: LLMs are expensive, and we cannot give our users genAI tools that are either uncontrolled or overly limited. That's why the very first "feature" on this track in 2026 will be an enhancement of the billing system and price lists with pay-as-you-use genAI charges. Once we have it, we will gradually roll out components you can use at a large scale, such as Codi via the REST API to remotely analyze and extract information from an account, a simplified documentation-only version of it, MCP servers for seamless development with flespi using AI coding tools, and so on. Once we are no longer blocked by token pricing, we will be able to deliver pay-as-you-use AI-backed functionality across a large variety of services.
Internally, the next important genAI milestone for us is the AI protocol engineering tool, whose beta version is already available and being heavily tested by our engineers. This is an internal tool, but once we integrate it into our protocol engineering process, we expect improved performance and quality in protocol integrations. Human engineers remain in the loop and responsible for the quality of the work, but the speed and depth of integration can improve drastically.
This tool is something we do not plan to make publicly available to our users, but the platform it is based on can serve as a foundation for other process- and task-oriented AI systems, such as system and internal log monitoring, minimal sysadmin operations, and certain other engineering tasks. The difference from chat-based AI platforms is that this one is process-focused and can execute a process silently, so chat and messaging are just an option, not a requirement. We also consider it a possible foundation for the next generation of AI assistants with much higher reliability, intelligence, and capabilities. However, this is not decided yet. So all in all, we are working in the genAI direction, currently focused mostly on internal process execution and on figuring out how to give our users something they will truly benefit from.
That’s it. It should be a very interesting year for us, and I hope the same for you.
Wish you a Merry Christmas and a Happy New Year!
