I have no idea what’s going on in the southern hemisphere, but in the north, we now have real spring. After a long and relatively warm winter, the temperature jumped to +15°C within just a few days, and the weather outdoors is great and sunny. For the flespi team, this means it is time for a season-opening BBQ party. At such outdoor BBQ parties, in a relaxed atmosphere, we often generate great ideas that fuel us all year round. And to keep that relaxed atmosphere outdoors during business days and hours, we need our AI coworker, codi, to back up our support process, and to do so at the highest quality level.
That’s why we focused on its training throughout the winter and the beginning of this spring, and we will keep pushing in this AI direction to new heights. According to our roadmap for 2024, codi should become a highly qualified expert in telematics, and in March we took one more step toward this goal by releasing the third version of our AI platform.
The evolution from the first version, backed by simple RAG, into the second took just two months. The principal change in this transition was the split of AI operations into a Level 1 support agent that interacts with users and a group of specialized experts addressing specific cases. We deployed the second version of the AI platform at the end of February with just two initial experts - one for generic support operations and another for developers, enriched with flespi REST API knowledge and various development know-how.
The result was very interesting - responses became slower but deeper. The most crucial problem, however, was that experts could not access the communication history with users and see what had already been suggested and mentioned. The L1 support agent was too limited to answer users directly on its own, and experts received no direct feedback from users. So we got higher response quality for some answers, but at the same time the communication line itself became less consistent. It is very similar to the real world of humans and large organizations, where the support team leads its own life and the engineering group leads its own. The lack of direct feedback from users to the people responsible for product development slowly defocuses the product away from the real problems of its users.
That’s why we decided to combine all the experience we’ve gained with NLP and invest in the next version of the AI platform. It was released just a week ago, and from what I see now in our support chats, its answers are amazingly good most of the time. We equipped it with knowledge about devices, plugins, streams, calculators, webhooks, protocols and their specific parameters, flespi REST API call schemas, and other narrowly focused flespi-related knowledge. We implemented tools for PVM code and expression generation that the AI can invoke when needed. We loaded it with all the knowledge we've accumulated over the years for specific protocols and provided access to device descriptions on our website.
So now codi is capable of generating PVM code (or code in any other programming language) to perform flespi-related developer tasks for you upon request, providing great diagnostics on the items you have in your account, and offering solid consulting about flespi. And still, it is just the beginning of our AI voyage in the telematics world…
If you ask me whether flespi is now focused on AI instead of telematics, I will give you a definitive negative answer. We are fully focused on telematics, but in order to stay ahead of our competitors, we have to integrate AI services into our operations, and this is what we are actively doing. With so much attention on AI nowadays, the pace of development there is much higher, which may overshadow telematics in terms of informational output. However, AI is just an additional service intended to handle routine operations so that we can devote even more time to our core activities in telematics. This is the plan we are currently executing.
By the way, after the support assistant, the second useful AI tool that you may use is the PVM code generator. We developed it at the Gurtam AI Hackathon and have already integrated it into flespi. Now you can activate it via HelpBox chat, but very soon we will create a standalone application to generate PVM code upon request directly in your flespi.io account. And if you do not know what PVM is and why you may need it - PVM is a special language we crafted to efficiently transform device message parameters from one representation and data format into another. We trained AI to write this code for your tasks, eliminating the need for you to learn a new language.
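To give you a feel for the kind of work PVM does, here is a minimal Python sketch of the same idea: taking raw device message parameters and producing a different representation of them. The parameter names and scaling factors below are hypothetical and used purely for illustration; the actual PVM syntax and your real parameters will differ.

```python
# A minimal Python sketch of the kind of parameter transformation PVM expresses:
# renaming parameters, rescaling raw values, and decoding numeric codes.
# Parameter names and scaling factors below are hypothetical examples.

def transform(message: dict) -> dict:
    out = dict(message)  # keep the original parameters intact

    # rescale a raw millivolt reading into volts (hypothetical parameter names)
    if "battery.voltage.raw" in message:
        out["battery.voltage"] = message["battery.voltage.raw"] / 1000.0

    # map a numeric code into a human-readable state
    states = {0: "off", 1: "on", 2: "sleep"}
    if "device.state.code" in message:
        out["device.state"] = states.get(message["device.state.code"], "unknown")

    return out


print(transform({"battery.voltage.raw": 12450, "device.state.code": 1}))
# adds: 'battery.voltage': 12.45 and 'device.state': 'on'
```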
Now, back to telematics. In March, our uptime was 99.9856%, with a single 6-minute downtime related to an overloaded buffered storage subsystem. The full explanation of this incident can be read in our NOC, but in simple words, the latency of messages pulled from channel buffers exceeded the allowed 5 seconds, and our uptime-controlling bot network declared a problem. However, apart from those pulling channel messages via the API, and possibly slightly increased latency in streams, which use the same buffered storage subsystem under the hood, nobody should have noticed any issues.
In March, we introduced two significant new features to the platform.
Firstly, we implemented SSO authentication and semi-automatic user management via realms. We thoroughly tested SSO with various providers, including Microsoft, Google, GitLab, OpenID, and Okta, ensuring seamless integration into corporate IT infrastructures with high security standards. For smaller startup-level users, this presents an opportunity to switch to realms for authentication if they have at least two people working with flespi. For those interested, we provide a video overview from our conference detailing security elements in flespi and how to combine them to prevent data loss.
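If you prefer to explore realms programmatically, below is a minimal sketch that lists the realms configured in an account via the flespi REST API. It assumes the /platform/realms/all endpoint and the standard FlespiToken authorization header; please verify both against the REST API documentation on flespi.io before relying on them.

```python
# Minimal sketch: list realms configured in a flespi account via the REST API.
# Assumes the /platform/realms/all endpoint and FlespiToken authorization;
# verify both against the API documentation at flespi.io before use.
import requests

FLESPI_TOKEN = "your-flespi-token-here"  # a token with access to platform resources

resp = requests.get(
    "https://flespi.io/platform/realms/all",
    headers={"Authorization": f"FlespiToken {FLESPI_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

for realm in resp.json().get("result", []):
    print(realm.get("id"), realm.get("name"))
```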
Secondly, we launched the Wialon overspeeding plugin. This plugin, while simple, is incredibly powerful, allowing users to add road speed limit information to device messages and detect violations. Similar to the Wialon reverse geocoder or Wialon LBS geocoder, this functionality is provided free of charge. However, it's important to note that flespi does not guarantee the stability of third-party providers such as the Wialon Maps service, and any issues with such services may impact the processing pipeline for device messages.
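Conceptually, the overspeeding check boils down to comparing the speed reported by the device with the road speed limit injected into the message. The Python sketch below illustrates this logic; "position.speed" is a standard flespi parameter, while the names of the injected limit and the resulting violation parameters are hypothetical placeholders rather than the plugin's actual output schema.

```python
# Conceptual sketch of an overspeeding check on an enriched device message.
# "position.speed" is a standard flespi parameter; the road limit and violation
# parameter names below are hypothetical placeholders.

def check_overspeeding(message: dict, tolerance_kmh: float = 5.0) -> dict:
    speed = message.get("position.speed")    # km/h reported by the device
    limit = message.get("road.speed.limit")  # hypothetical name for the injected limit
    if speed is None or limit is None:
        return message                       # nothing to compare against

    if speed > limit + tolerance_kmh:
        message["overspeeding.violation"] = True              # hypothetical output
        message["overspeeding.excess"] = round(speed - limit, 1)
    return message


print(check_overspeeding({"position.speed": 92.0, "road.speed.limit": 70}))
```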
We've integrated two new protocols into flespi: Kingsword and Thingsys.
Additionally, we've completed the integration and testing of two MDVR device manufacturers in telematics: Streamax and Howen. We don't plan to add any extra features for these protocols at the moment, but we will address any undesired behavior should it arise when devices operate outside their protocol specifications. Overall, these two protocols are now ready for production-grade video telematics integrations.
To enhance simplicity and compatibility, we've changed the reporting format for DTC codes across all protocols to an array stored in a single 'can.dtc' parameter. We recommend using the DTC decoding plugin to inject textual descriptions of the corresponding DTC codes into messages.
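As a quick illustration of the new format, a message now carries all active codes as a list under the single 'can.dtc' parameter, which is easy to iterate over in any language. The sample codes in the sketch below are arbitrary.

```python
# Illustration of the array-based DTC reporting: all codes arrive in a single
# "can.dtc" parameter. The sample codes below are arbitrary examples.
message = {
    "timestamp": 1711929600,
    "can.dtc": ["P0301", "P0420"],  # array of active diagnostic trouble codes
}

for code in message.get("can.dtc", []):
    # the DTC decoding plugin can inject textual descriptions for these codes
    print(f"active DTC: {code}")
```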
We're nearing completion of our work on the flespi Grafana plugin. It's now capable of visualizing telematics data in Grafana sourced from device messages, various logs, account statistics, and container messages. You can subscribe to its changelog to track our progress.
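Under the hood, the plugin visualizes the same data you can query yourself through the REST API. As a rough illustration, the sketch below fetches recent messages of a single device; the device ID is a placeholder, and the format of the "data" query parameter is an assumption to verify against the gw/devices messages documentation.

```python
# Rough sketch: fetch recent messages of one device via the flespi REST API,
# the same kind of data the Grafana plugin visualizes. The device ID is a
# placeholder and the "data" query parameter format is an assumption to check
# against the gw/devices documentation.
import json
import requests

FLESPI_TOKEN = "your-flespi-token-here"
DEVICE_ID = 123456  # placeholder device ID

resp = requests.get(
    f"https://flespi.io/gw/devices/{DEVICE_ID}/messages",
    params={"data": json.dumps({"count": 10})},
    headers={"Authorization": f"FlespiToken {FLESPI_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

for msg in resp.json().get("result", []):
    print(msg.get("timestamp"), msg.get("position.speed"))
```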
In addition to this, we've made two significant internal upgrades.
Firstly, we've upgraded the OS of all servers in our infrastructure to the latest Debian stable version. This process took approximately a week of preparation and four days to apply the upgrade to over a hundred servers in live mode, including pacemaker clusters and gateway routers. Drawing from our experience in previous upgrades, this one went remarkably smoothly and seamlessly in the background.
The second significant internal upgrade involves our analytics system. We moved its internal state data from relational Postgres to an MQTT-based storage known as HASD. This architectural change has significantly enhanced the functionality available for analytics calculations. The first feature to benefit from it will be the ability to use device metadata in calculator expressions.
Another pending feature is geofences as an entity on the platform. Currently, geofences can only be utilized within calculator configurations or plugins. We plan to make geofences a standalone entity within flespi, enabling you to store geometry data and analyze it alongside devices.
That's all for now. Wishing you pleasant weather outdoors and clear thinking indoors until next month!