
Is the future of market intelligence being written in AI ink?
The EspritsCo team had the opportunity to attend this year's Documation conferences, in particular those devoted to the challenges AI poses for intelligence professions. Here are some key takeaways, along with insights and inspiring feedback from the speakers.
Generative AIs at the heart of discussions on the future of market intelligence
Intelligence professionals face new strategic challenges with the integration of artificial intelligence, which is likely to disrupt current monitoring practices. It brings real added value, notably by automating certain tasks and generating summaries of text corpora. However, many challenges remain around the quality and reliability of data, information security, and the preservation of the confidentiality and secrets of a company whose tools rely on APIs from generative AI providers such as OpenAI, Google, or Microsoft.
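To make the confidentiality concern concrete, here is a minimal sketch of one common precaution: redacting sensitive fragments client-side before any text is forwarded to a third-party generative AI API. The patterns and names (`SENSITIVE_PATTERNS`, `redact`) are illustrative assumptions, not a turnkey solution.

```python
import re

# Hypothetical patterns; a real deployment would tailor these to the
# organization (project code names, client identifiers, credentials...).
SENSITIVE_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),         # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD/ID NUMBER]"),  # long digit runs
    (re.compile(r"(?i)\bproject\s+\w+\b"), "[PROJECT NAME]"),     # internal code names
]

def redact(text: str) -> str:
    """Replace sensitive fragments before the text leaves the company."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Contact alice.martin@acme.example about Project Nightingale."
sanitized = redact(note)
print(sanitized)  # -> "Contact [EMAIL] about [PROJECT NAME]."
# Only `sanitized` would then be sent to the external provider's API.
```

In practice, such redaction complements, and never replaces, the contractual and architectural safeguards negotiated with the provider.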
Unsurprisingly, AI was at the heart of the roundtable discussions: What are the impacts on monitoring practices? What challenges and behaviors accompany the use of AI? What are the prospects and likely developments? This article aims to provide an overview of the topics covered.
Impact of AI on market intelligence practices
The integration of AI into monitoring practices is a divisive topic, splitting professionals on both its role and its impact. Let's be clear: we are still in an exploratory phase, and we lack the hindsight to form a concrete idea of the changes and impacts it will bring.
At the forefront of expectations is AI's potential to help monitoring professionals build an overview of a given topic. While AI has little impact on the collection phase, the capabilities that generative AI opens up for qualifying needs, extracting information, and interacting with document corpora genuinely open new horizons. Some of these applications already exist and let monitoring professionals save time on a whole set of tasks.
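As an illustration of this kind of time-saving use case, here is a minimal map-reduce sketch of corpus summarization. The `ask_llm` helper is a hypothetical stand-in for whichever generative AI API the monitoring tool relies on.

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical helper: wire this to your generative AI provider.
    raise NotImplementedError

def summarize_corpus(documents: list[str], topic: str) -> str:
    # "Map" step: condense each collected item individually, so that long
    # documents fit within the model's context window.
    partial_summaries = [
        ask_llm(f"Summarize this document in 3 sentences, "
                f"focusing on '{topic}':\n\n{doc}")
        for doc in documents
    ]
    # "Reduce" step: merge the partial summaries into a single overview.
    joined = "\n\n".join(partial_summaries)
    return ask_llm(
        f"Merge these partial summaries into one overview of '{topic}', "
        f"flagging any contradictions between them:\n\n{joined}"
    )
```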
One of the main questions concerns the reliability of the output: how can we be sure that what these AIs produce is relevant? A second question follows directly: if the output is not reliable, how can professionals, monitoring specialists or not, rely on the generated answers in their daily work? We learned, for instance, that Google advised its own employees not to trust the results of its own generative AI. Contrary to what one might imagine, generative AI engines do not think; they simply generate a statistically coherent sequence of words, which has earned them the friendly nickname of "idiot savant".
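A deliberately tiny caricature can make the point: the sketch below "writes" by sampling each next word from a probability table, with no understanding anywhere. Real LLMs are vastly more sophisticated, but the principle of picking a statistically plausible next token is the same.

```python
import random

# Toy next-word probability table: fluent-sounding output, zero comprehension.
NEXT_WORD = {
    "the":    {"market": 0.6, "report": 0.4},
    "market": {"is": 0.7, "report": 0.3},
    "report": {"is": 1.0},
    "is":     {"growing": 0.5, "reliable": 0.5},
}

def generate(word: str, length: int = 5) -> str:
    words = [word]
    for _ in range(length):
        choices = NEXT_WORD.get(words[-1])
        if not choices:
            break  # no known continuation for this word
        nxt = random.choices(list(choices), weights=choices.values())[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the market is growing" -- fluent, not "thought"
```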
The critical thinking of monitoring professionals, and the need to source, verify, and cross-reference information, are becoming even more essential at a time when trust in the information and multimedia content we are exposed to is eroding as quickly as generative AIs can produce it. Moreover, the quality of the results depends heavily on the relevance and precision of the prompts used. Prompt writing is thus a new skill to add to the expertise of monitoring professionals, and more broadly of all the professionals who will be called upon to use these tools!
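As a minimal illustration of this prompt sensitivity, compare a vague request with a structured one; the wording and fields below are our own illustrative choices, not a standard.

```python
# Two ways of asking for the same deliverable. The structured prompt pins
# down role, scope, format, and sourcing constraints -- the parameters that
# most influence output quality in practice.

vague_prompt = "Summarize the attached articles about hydrogen."

structured_prompt = """You are assisting a market-monitoring analyst.
Task: summarize the articles below on the hydrogen storage market.
Audience: an R&D steering committee.
Format: 5 bullet points, then 2 identified weak signals.
Constraints: rely only on the articles provided; if a claim is not
supported by them, say so explicitly instead of guessing.

Articles:
{articles}"""

# The same model, given structured_prompt.format(articles=...) rather than
# vague_prompt, typically returns output that is far easier to verify.
```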
Challenges and behaviors associated with the use of AI
The experts present at the roundtables clearly reported that attitudes toward AI vary greatly, ranging from curiosity about its capabilities to fears about its impact on information professions. Those with an interest in AI are curious about the career opportunities that could emerge and believe that AI can stimulate innovation in information professions. Professionals are eager to explore potential applications and develop new skills to remain relevant in a job market that is likely to undergo changes. Conversely, some are apprehensive about the integration of AI, which leads them to adopt a wary attitude toward this technology. There is a fear of being replaced by these new technologies. Communication and training around this topic will be a key issue in the acceptance of these new tools.
Ongoing training, enabling monitoring professionals to understand and master constantly evolving AI tools, will be essential. There are strategic issues at stake, particularly around the use of LLMs: mastering AI will require investing time and resources in training and skills development. The time and efficiency gains must, in the long run, be put into perspective: first against the training and acculturation time needed to master these tools, and second against the time that will have to be spent verifying the veracity of the results they generate. Whatever the tools used, monitoring professionals and analysts alone remain accountable for the information and decision-making support they disseminate.
Finally, monitoring professionals will have a role to play in passing on best practices and preventing the risks tied to the use of AI in monitoring and analysis activities. While some speakers insisted on the need to develop expertise around AI, we at EspritsCollaboratifs are convinced that information skills and literacy must be disseminated massively within organizations, and that information professionals are the most legitimate people to support the spread and generalization of these skills. The best protection against hallucinations will remain controlled sourcing, built on trusted, high-quality sources: feeding generative AI engines with vetted information makes their use far less risky.
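Here is a minimal sketch of what such controlled sourcing can look like in practice: keep only items from a whitelist of vetted domains, then instruct the model to answer exclusively from those excerpts. `TRUSTED_DOMAINS` and the prompt wording are illustrative assumptions.

```python
from urllib.parse import urlparse

# Hypothetical whitelist maintained by the monitoring team.
TRUSTED_DOMAINS = {"lemonde.fr", "reuters.com", "europa.eu"}

def is_trusted(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

def build_grounded_prompt(question: str, items: list[dict]) -> str:
    """Keep only vetted sources, then force the model to answer from them."""
    vetted = [it for it in items if is_trusted(it["url"])]
    excerpts = "\n\n".join(f"[{it['url']}]\n{it['text']}" for it in vetted)
    return (
        "Answer the question using ONLY the excerpts below. "
        "Cite the URL of each excerpt you use. If the excerpts are "
        f"insufficient, say so.\n\nQuestion: {question}\n\n{excerpts}"
    )

items = [
    {"url": "https://www.reuters.com/markets/article", "text": "..."},
    {"url": "https://random-blog.example/post", "text": "..."},
]
print(build_grounded_prompt("What happened on the hydrogen market?", items))
# Only the reuters.com excerpt survives the filter.
```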
Monitoring and collaborative analysis to counterbalance the loss of confidence in information?
While these developments are likely to change certain aspects of intelligence professions, it is important to weigh the opportunities against the associated challenges. Beyond the technical changes brought by automating certain tasks, AI will change the way we access, manipulate, and query information. And since trust in information seems destined to erode as the use of generative AI spreads, it is essential to develop an internal capacity for collaborative, distributed analysis.
This is a long-standing conviction at EspritsCollaboratifs, and it is what drives us to integrate generative AI into Curebot as a tool in the service of monitoring and analysis.
This analytical capacity can only be distributed and collaborative: while each of us will eventually be able to call on a generative AI to summarize a patent, only a technical expert in the field can cast a truly critical eye on the generated text, or relate that patent to the company's own projects and activities.
AI should not be seen as a solution that will replace information professions, but rather as a support tool. We must stop humanizing AI, which is nothing more than a machine producing a sequence of characters in response to a prompt. What matters is to keep control over data analysis, recommendations, and decision-making support, and above all not to take the tool for anything other than what it is: one more instrument in the technical and methodological toolbox of monitoring professionals and analysts. No AI today can replace the expertise and knowledge of the analyst, or the trust an organization places in its monitoring and analysis teams and processes.
AI and monitoring: in search of the right risk-benefit balance
While AI appears to offer real added value to information professionals by automating certain tasks and facilitating data analysis, it also poses challenges in terms of data reliability and the interpretation of results. And while intelligence professionals' attitudes toward AI range from curiosity to fear, it is crucial to adopt a balanced, cautious, and measured approach in order to reap the benefits while managing the risks.
Although rarely addressed during the conferences and roundtables, the ethical, societal, and environmental issues remain crucial: the past and future societal costs of training language models, as well as the present and future environmental costs of every request sent to a generative AI (carbon impact, water consumption, rare materials needed to produce future generations of graphics cards, accelerated obsolescence of the GPU fleets currently in use, etc.). These social and environmental externalities must be taken into account as our organizations develop and adopt these technologies. Let's make a small bet here: within a few years, corporate CSR teams will be questioning the internal uses of these generative AIs...
Follow us on LinkedIn