London Calling: Around AI Expo in 1 day

With all the recent intrigue surrounding the true origins of Mr. Bean, I am happy to report he joined me in London at the AI Expo a couple of weeks ago. I cannot help but thank him for guiding me through the bewildering atmosphere of a technology show, including booths with folks explaining (with varying degrees of passion) why their particular solution will save London Bridge from falling.

People come to such shows for different purposes: some want to see what others are doing – and speak to other (real) people, instead of simply browsing websites and reading white papers. Some use it as a sales blitz opportunity. Going to this particular event, I set a specific goal for myself: to compile a list of solutions I would personally want to use in my own projects in the short to medium term.

Thus, here we go. I would not dare claim these are the best thing since sliced bread, but they can definitely help solve real-world technical needs in one way or another.

Omron Sensors

We’ve already worked with Omron cameras and industrial computers, but I was very intrigued by that particular cutie.

This thing allows quick, easy, efficient and discreet deployment of environmental data-collection sensors in places like hospitals, essentially turning any computer there into an “edge” node. Thanks to Tony Wilmot for a great explanation.

VDoo

I met Luc Vervoort during lunchtime, and he quickly sold me on the concept of catching IoT devices’ security vulnerabilities – and catching them early. VDoo offers real-time device profiling and continuous security updates, so that protection is an ongoing affair. This could fit the needs of several critical infrastructure companies we work with, such as operators of IoT-enabled water, oil and gas pipelines.

Enigma Pattern

Not sure about you, but we often have trouble training our Computer Vision algorithms on an image dataset large enough to model what we’ll come across in real deployments. You simply cannot collect enough face variations and lighting conditions when working on ID fraud detection, liveness and other KYC tasks. Here Lukasz Kuncewicz and his team gave a very compelling technical explanation, accompanied by a no-nonsense sales pitch. Basically, these guys generate so-called synthetic images of very high quality from the original images, thus creating datasets that are both sufficiently large and sufficiently varied. I have already shown their ideas to my own clients; let’s see how it works out.
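Enigma Pattern’s actual generation pipeline was not disclosed in detail, so do not take the following as their method. As a toy sketch of the underlying idea – expanding a small image dataset with synthetic variants – even trivially simple geometric and photometric transforms illustrate the principle (real systems use far more sophisticated generation):

```python
import random

def augment(image, n_variants=4, seed=0):
    """Generate synthetic variants of a tiny grayscale image
    (a list of rows of 0-255 ints) via random horizontal flips
    and random brightness scaling. A toy illustration only.
    """
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        img = [row[:] for row in image]          # copy the original
        if rng.random() < 0.5:                   # random horizontal flip
            img = [row[::-1] for row in img]
        factor = rng.uniform(0.7, 1.3)           # random brightness jitter
        img = [[min(255, max(0, int(p * factor))) for p in row]
               for row in img]
        variants.append(img)
    return variants

# Expand a one-image "dataset" into five images (original + 4 synthetic).
original = [[10, 50, 200],
            [30, 120, 250]]
dataset = [original] + augment(original)
print(len(dataset))  # 5
```

The point is that each synthetic image is a plausible new training example derived from an existing one, so the dataset grows in both size and variability without new photo shoots.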

Vemotion

Now, that one is very cool. Imagine you need to receive video feeds from bad-bandwidth and/or high-latency locations, from several devices simultaneously, and so forth. Iain James and his colleagues helpfully described their approach under specific scenarios – for example, collecting video feeds from cameras attached to people walking around a remote building. They have solutions for encoders, clients and servers – all seemingly done in a very consistent and smart way. I am absolutely going to look into their products in more detail.

Also

Several more encounters are worth mentioning: Mobotix with decentralized control of intelligent cameras, Minfarm Tech’s satellite LoRaWAN gateways, ProGlove’s IIoT-enabled bodywear and SenX’s enabler platform for time series data. I feel bad for not being able to cover them here; everyone on that list had something eye-catching. Still, it is time to stop.

The End

That was indeed a long trip around the AI world, which ended with an (almost rooftop) networking party. I took in a couple of scotches, was lucky to chat with several more IoT and Computer Vision enthusiasts, and, exhausted but happy, retired to my hotel. What Mr. Bean was up to I cannot really say; he had abandoned me much earlier, ostensibly for a genuinely higher purpose – last seen heading off with Johnny English to save London Bridge…