1 March, 2024

February 2024 change log

Major flespi improvements in February 2024.

February is the shortest month of the year, but 2024 is a leap year, so this time it was almost as long as the other months. And the flespi team committed a lot this February.

As usual, I will start with our uptime mark. In February there were zero incidents, and we finished the month with 100% uptime.

We published our roadmap for 2024 and summed up how the plan in the 2023 roadmap compared to reality. In short, we had a great year in 2023 and expect 2024 to be even better. It also looks like we will hit our first million registered devices this year. It is hard to estimate exactly when this will happen because of the uneven pace at which various fleet management platforms and telematics applications migrate to flespi, but most probably at the end of summer.

In February we had 4 primary development tracks:

  • Core telematics functionality;

  • Video Telematics fine-tuning and enhancement;

  • SSO authorization services;

  • Exploring and developing AI tools.

The most significant and visible result of our core telematics track is the release of the new msg-copy plugin. This is a special plugin: unlike most other plugins, it does not transform the message but re-publishes newly registered messages (or part of them) to another device. It is mostly intended to combine multiple tracking devices into a single device, for example, to copy messages received via an Iridium connection to the paired TCP-based device.
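
For those who prefer the REST API over the panel, here is a minimal sketch of how such a setup might look in Python. The plugin type identifier, the configuration field names, and the plugin-to-device assignment endpoint are assumptions made for illustration; check the flespi documentation (or ask codi) for the actual schema.

```python
import requests

# A minimal sketch (with assumed field names) of creating a msg-copy plugin
# and attaching it to the source device via the flespi REST API.
FLESPI_TOKEN = "XXXXXXXX"                       # your flespi token
HEADERS = {"Authorization": f"FlespiToken {FLESPI_TOKEN}"}
BASE = "https://flespi.io/gw"

# 1. Create the plugin that re-publishes newly registered messages.
plugin = {
    "name": "iridium-to-tcp-copy",
    "type_id": 0,                               # hypothetical msg-copy type id
    "configuration": {
        "target_device_id": 123456,             # hypothetical paired TCP device
    },
}
resp = requests.post(f"{BASE}/plugins", json=[plugin], headers=HEADERS)
plugin_id = resp.json()["result"][0]["id"]

# 2. Assign the plugin to the source (Iridium) device so its newly registered
#    messages are copied to the target device (endpoint path is an assumption).
requests.post(f"{BASE}/devices/654321/plugins/{plugin_id}", headers=HEADERS)
```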

For the Video Telematics track, we defined two standard commands that we will support for all capable video devices:

  • video_timeline command to request information from the device about the video data available for retrieval within a specific date range;

  • video_playback command to start an HLS video stream of previously recorded footage.

Initially, these two new standard commands are supported for Streamax devices, with other device manufacturers to follow soon.
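
To give a feel for how these commands can be used, below is a minimal sketch of requesting the recording timeline for one day via the flespi REST API. The "properties" wrapper and the "begin"/"end" parameter names are assumptions for illustration; the real parameters are listed in the commands description of your video device.

```python
import requests
from datetime import datetime, timezone

TOKEN = "XXXXXXXX"
HEADERS = {"Authorization": f"FlespiToken {TOKEN}"}
DEVICE_ID = 123456                              # hypothetical video device id

# Ask the device which video segments are available for February 1, 2024.
command = [{
    "name": "video_timeline",
    "properties": {                             # assumed payload structure
        "begin": int(datetime(2024, 2, 1, tzinfo=timezone.utc).timestamp()),
        "end":   int(datetime(2024, 2, 2, tzinfo=timezone.utc).timestamp()),
    },
}]
resp = requests.post(
    f"https://flespi.io/gw/devices/{DEVICE_ID}/commands",
    json=command,
    headers=HEADERS,
)
print(resp.json())                              # available segments, if any
```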

Our very long story with realms seems to be on final approach (or maybe it is just the beginning, who knows?). We released integration with OAuth 2.0 and OpenID Connect identity providers. In short, this enables seamless flespi integration into large companies, with their enterprise SSO system being responsible for authorizing users into flespi. Small companies can also utilize this system for secure user and token management. Later in March, we will publish an article describing how to integrate authorization into flespi via your OAuth 2.0 provider, using GitLab as an example.
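
To give a rough idea of what such an integration involves, below is the set of parameters an OpenID Connect identity provider typically asks you to exchange. The field names and structure are illustrative only and are not the actual flespi realm schema; the March article with the GitLab example will show the real configuration.

```python
# Illustrative only: the data you will typically gather from your SSO
# administrator before wiring an OpenID Connect provider to a flespi realm.
oidc_provider = {
    "issuer": "https://gitlab.example.com",        # your identity provider
    "client_id": "flespi-realm",                   # OAuth client registered there
    "client_secret": "keep-me-secret",             # never commit this value
    "scopes": ["openid", "profile", "email"],      # standard OIDC scopes
    "redirect_uri": "https://flespi.io/callback",  # hypothetical callback URL
}
```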

One small but important flespi.io usability feature is the ability to use our REST API selectors in the panel's search bar to quickly navigate between your items. It is a lifesaver when you have tens of thousands of items (devices, for example ;).
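
The same selectors work in the REST API itself, so whatever you type in the search bar you can also use in your code. Below is a quick sketch; the name-based filter on the last line is an assumption about the selector syntax, while numeric ids, comma-separated lists, and "all" are the basic forms.

```python
import requests

TOKEN = "XXXXXXXX"
HEADERS = {"Authorization": f"FlespiToken {TOKEN}"}
BASE = "https://flespi.io/gw"

# Select every device in the account.
all_devices = requests.get(f"{BASE}/devices/all", headers=HEADERS).json()

# Select a few devices by their numeric ids.
some_devices = requests.get(f"{BASE}/devices/123,456,789", headers=HEADERS).json()

# Hypothetical name-based selector, the kind you can also type into the
# panel's search bar to jump straight to matching items.
trucks = requests.get(f"{BASE}/devices/name=truck*", headers=HEADERS).json()
```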

And another small but important feature is the quick AI-backed knowledge search that works as you type your question to our team. It immediately gave us a 20% drop in the number of questions we receive from our users. Links with answers appear as soon as you type your question, and they really help! This is part of codi’s functionality, backed by weaviate and its contextual search power described here.
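
For the curious, this is roughly what a contextual (near-text) query looks like with the weaviate Python client. The class and field names are hypothetical, and flespi's actual knowledge base schema is certainly different; this is only to illustrate the kind of semantic search that powers the feature.

```python
import weaviate

# Connect to a weaviate instance (the URL is a placeholder).
client = weaviate.Client("https://your-weaviate-instance")

# Semantic search: find knowledge base articles closest in meaning to the
# user's question, not just keyword matches.
result = (
    client.query
    .get("KbArticle", ["title", "url"])            # hypothetical class/fields
    .with_near_text({"concepts": ["how to send a command to a device"]})
    .with_limit(3)
    .do()
)
print(result)
```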

And finally, our AI track. Here we had a crazy journey in February. You know, LLM-based prompt development can hardly be compared to standard software development: logic expressed in natural wording in a prompt vs. straightforward code. Sometimes the LLM understands what you mean and sometimes it does not. And, as with humans, it can follow you or decide to do something on its own. Crazy little thing. But interesting and inspiring with the capabilities it brings.

At the end of February, part of the flespi team participated in the Gurtam AI Hackathon. Our team decided to solve what initially seemed an unsolvable task: train an LLM to generate PVM code. It is an ambitious task for a language that is created, maintained, and actively used by only a small team of engineers. But because PVM code is used to transform messages according to user-defined logic, and the number of such transformations among our users is steadily growing, it is becoming more and more important for us. We wanted to provide our users with a tool that can generate the code for such a transformation based on their wishes or on their preferred development language. In short, we’ve done it. You will find all the details in our blog next week.

codi received its own dedicated changelog, which you can track from now on. Among its recent capabilities are:

  • generation of flespi expressions upon request;

  • deep knowledge of flespi REST API calls;

  • assistance with writing code in the programming language of your choice that utilizes the flespi API (see the sketch after this list);

  • understanding of integrated devices and protocols;

  • understanding of the commands and settings available for a particular device.
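
As a taste of the third item on the list, here is the kind of snippet codi can draft for you: fetching the most recent messages of a device via the flespi REST API. The exact format of the filter passed in the "data" query parameter is an assumption for illustration.

```python
import requests

TOKEN = "XXXXXXXX"
HEADERS = {"Authorization": f"FlespiToken {TOKEN}"}
DEVICE_ID = 123456                              # hypothetical device id

# Fetch the latest 10 messages of the device (filter format is assumed).
resp = requests.get(
    f"https://flespi.io/gw/devices/{DEVICE_ID}/messages",
    params={"data": '{"count": 10, "reverse": true}'},
    headers=HEADERS,
)
for msg in resp.json().get("result", []):
    print(msg.get("timestamp"), msg.get("position.latitude"), msg.get("position.longitude"))
```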

Feel free to utilize its capabilities to solve your tasks. This is still a very early stage of development; in fact, the second version of our AI platform, installed last week, is totally different from the first one. Obviously, I will post more details on our findings and the changes we apply in the AI direction later.

And remember: the pace of development in AI tech, both globally and locally inside our product, is incredible. Most probably you will be shocked by the number of tasks and operations it can efficiently handle for you by the end of this year.

Finally, I want to say that my work with AI gave me a clear understanding of how cool the human brain is and how much it holds. AI cannot replace humans now, and most probably will not in the next few years. But people who use AI to increase their productivity, especially in routine tasks, will obviously replace people who do not.

Till next month!