January 28, 2019
According to Forrester’s 2016 Global Business Technographics® Devices and Security Workforce Survey (Forrester is a prominent business research and advisory company), about 54% of global employees are regularly interrupted in their daily work because they can’t find or access the information they need to complete their tasks. A poor search experience means they lose a lot of time hunting down that information. Imagine what this costs a company in the long run! Newer search technologies that better understand what a user is actually looking for can drastically reduce that lost time and the costs that come with it.
OK, so you came home after an amazing road trip in Australia, and you ended up with more than 15,000 pictures. You want to impress your friends with your kangaroo selfie. A few years ago, you had to dig through a huge folder on your PC to find that one needle in the haystack. Who sits around for hours giving every picture a descriptive, easily searchable file name, right? Nowadays, when you enter the query ‘me with a kangaroo’ in a good search engine, it will find you through face recognition and identify the kangaroo, having learned what the animal actually looks like from a large amount of data, such as hundreds of kangaroo pictures. And boom! A job that used to take half an hour is reduced to just a few seconds.
We can thank cognitive search for this, a technology that uses machine learning and natural language processing (NLP) to understand and organise data, so that textual queries such as ‘me’ and ‘kangaroo’ also yield visual search results like pictures or videos. This is just one simple example, but as you’ll see in this article, the applications of this technology are very broad.
Real-world data is messy. It comes in all different shapes and sizes, from text documents and images to voice recordings and videos.
Many of these files contain far more information than a search engine can collect at first glance. How do you link a picture of a kangaroo to the search query ‘kangaroo’? And how do you link a video about Australia that doesn’t mention ‘kangaroo’ in its title to the same query?
It can be a real challenge to extract the necessary information from this data and make it available to search engines. So how do we make this unsearchable data searchable?
Cognitive search technology works in three phases: Data Ingestion, Enrichment and Exploration. The first phase mainly consists of gathering all sorts of data from various sources, such as websites, intranet systems, mailboxes, cloud drives and databases, and extracting content from those sources. These can be Microsoft Office documents, PDF documents, JPEG files, AVI video files, MP3 audio files, or anything similar.
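To make the ingestion phase concrete, here is a minimal sketch in Python. It assumes the data lives on a local drive; the per-format extractor functions are hypothetical placeholders, where a production system would plug in real PDF, Office or image parsers.

```python
import os

# Hypothetical extractor: real systems would use dedicated libraries
# (a PDF parser, an Office reader, an image analyser) per format.
def extract_text(path):
    with open(path, encoding="utf-8", errors="ignore") as f:
        return f.read()

# Map file extensions to extractor functions.
EXTRACTORS = {
    ".txt": extract_text,
    ".md": extract_text,
    # ".pdf": extract_pdf, ".jpg": extract_image_labels, ...
}

def ingest(root):
    """Walk a folder tree and collect raw content per supported document."""
    documents = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            extractor = EXTRACTORS.get(ext)
            if extractor:
                path = os.path.join(dirpath, name)
                documents.append({"path": path, "content": extractor(path)})
    return documents
```

Files without a registered extractor are simply skipped; in a real pipeline they would be queued for a different enrichment skill instead.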
In the Enrichment phase, different kinds of artificial intelligence help analyse all the collected data. Image analysis extracts information about the visual content of an image. OCR (optical character recognition) transforms images of handwritten text on a sticky note, or of printed text, into processable data. Named entity recognition and key-phrase extraction can determine the subject of an unstructured text and assign it to pre-defined categories. Sentiment analysis captures the emotional tone of a text and saves it as metadata on the document. All this processed data is then automatically classified by a machine learning algorithm that predicts labels to tag the documents with. These labels group the data into subsets of documents that will appear in the search results for a particular query.
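The idea of turning raw text into searchable metadata can be sketched in a few lines of Python. This is a toy illustration only: the stopword list and keyword-based label lookup are made-up stand-ins for the trained NLP models and classifiers a real enrichment pipeline would use.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "in", "of", "and", "to", "is", "with", "on", "our", "during"}

# Toy label vocabulary; a production system would use a trained
# classifier instead of this hand-made keyword lookup.
LABEL_KEYWORDS = {
    "wildlife": {"kangaroo", "koala", "animal"},
    "travel": {"trip", "australia", "beach"},
}

def enrich(text):
    """Attach key phrases and predicted labels as document metadata."""
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    # Naive key-phrase extraction: the most frequent non-stopwords.
    key_phrases = [w for w, _ in Counter(words).most_common(3)]
    # Naive classification: a label applies if any of its keywords occur.
    labels = sorted(
        label for label, kws in LABEL_KEYWORDS.items() if kws & set(words)
    )
    return {"key_phrases": key_phrases, "labels": labels}
```

The point is the shape of the output: whatever models do the work, enrichment produces structured metadata (labels, phrases, sentiment scores) that gets stored alongside each document.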
During the final Exploration phase, the cognitive search system uses the extra knowledge that resulted from the Enrichment phase to give the user better quality search results. The system has a better understanding of what the user is looking for and provides more useful data to fulfil the user’s search query, whether it is a text, image or voice search.
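A tiny ranking function shows how enrichment metadata can improve results in the Exploration phase. The document dictionary shape (`content`, `labels`, `key_phrases`) is an assumption for this sketch, and the scoring weights are arbitrary; real engines use far more sophisticated relevance models.

```python
def search(query, documents):
    """Rank documents for a query; enrichment metadata boosts the score."""
    terms = set(query.lower().split())
    scored = []
    for doc in documents:
        # Plain full-text matching on the raw content.
        content_hits = len(terms & set(doc["content"].lower().split()))
        # Matches against labels and key phrases from the Enrichment
        # phase count double (an arbitrary boost for illustration).
        metadata = doc.get("labels", []) + doc.get("key_phrases", [])
        metadata_hits = len(terms & set(metadata))
        score = content_hits + 2 * metadata_hits
        if score:
            scored.append((score, doc["path"]))
    return [path for _, path in sorted(scored, reverse=True)]
```

Notice that a kangaroo photo with no matching text at all can still rank first, because image analysis tagged it with the label ‘kangaroo’ during enrichment.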
One of the best-known providers of artificial intelligence technology for cognitive search solutions is Microsoft Cognitive Services. You can integrate it easily into Microsoft Azure Search or another search engine of your choice.
At an internal conference, Making Waves used a similar search technology based on artificial intelligence. A few of our developers created an app that could tell you which drinks were still available at the venue, just by taking a picture of the glass or mug. The app extracted and processed the information from that picture to identify the type of drinking vessel. As a result, you got an overview of the currently available beverages you could order in that particular glass or mug. The app was developed with Microsoft Azure Cognitive Services (Custom Vision Service) during one of our hackathons.
As you can see, cognitive search technology can be used in tons of applications. At Making Waves we love to brainstorm how these technologies can help our clients to reach their business goals and optimise their processes. The options are endless and so is our imagination!
Co-author & editor: Pieterjan Benoit firstname.lastname@example.org