Strategic Intelligence and Generative AI: Balancing innovation and security
At the "generative artificial intelligence" turning point that players in the business intelligence sector are currently negotiating, the right balance must be sought, found and maintained between:
- Functional innovation, enrichment of monitoring and analysis experience, productivity gains
- Security risks linked to sensitive information, training biases, risks of error and hallucination, loss of confidence in information
- Impacts on the jobs and roles of monitoring professionals, and more broadly the associated societal and environmental impacts, past and future.
At EspritsCollaboratifs, we have made several choices and structuring compromises around the integration of generative artificial intelligence into Curebot in order to enrich our strategic intelligence tool while respecting our fundamental values and visions of a responsible, ethical digital world at the service of the development of human and collective intelligence.
Business intelligence in the context of digital sovereignty: our choice of self-hosting
For a business intelligence tool vendor, there are two main ways to integrate generative AI into its platform:
- immediate, relatively simple and inexpensive use of APIs from recognized suppliers
- the development of its own AI infrastructure: a complex project requiring advanced skills and significant financial outlay
Despite the apparent technical and financial advantages of external APIs, self-hosting has always been the obvious choice to guarantee our customers' digital sovereignty and information security, keeping their strategic data away from non-European jurisdictions.
Using generative AI APIs from American vendors (OpenAI, Copilot, Gemini, Claude, Bard...) is fundamentally at odds with the principles of business intelligence and information security. By relying on these APIs, users of business intelligence platforms risk unwittingly disclosing sensitive information: queries that expose an employee's specific interests, external resources of interest or even internal, potentially confidential documents shared with the service, requests for help formulating a point of view, a comment or a paragraph. All of this data, and the interests it reveals, is exposed to the prying eyes of the U.S. justice system, an arm of the U.S. government in supporting the competitiveness of American companies.
Just as companies have become aware of the risks of using American search engines and of the importance of hosting their strategic data on European clouds, it is just as crucial for companies, particularly French ones, to ensure that their employees do not expose confidential or regulated information, and therefore to choose only providers that commit to relying neither on American artificial intelligence services and APIs nor on servers governed by foreign laws.
Strategic intelligence, AI and Open Source: the choice of autonomy
To consolidate our autonomy and that of our customers, we have opted for a French Open Source model. This approach guards against the risks of dependence on third parties, such as sudden API shutdowns or price increases, events already observed in the industry that threaten the continuity of monitoring activities.
Strategic intelligence and AI: transparency, user control
Although generative AIs are the textbook example of an algorithmic black box, we will continue to bring the greatest possible transparency and control to users of our intelligence platform, as we did when introducing clustering algorithms for large corpora of resources. We will gradually expose customizable parameters such as the creativity level, the role the assistant takes on, and the target language.
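As an illustration, user-facing controls like these typically map onto a small set of model parameters. The sketch below is hypothetical (the function and parameter names are ours, not Curebot's), assuming the creativity indicator maps to a sampling temperature on a self-hosted model:

```python
# Hypothetical sketch: parameter names and payload structure are
# illustrative, not Curebot's actual API.

def build_generation_request(prompt: str,
                             creativity: float = 0.3,
                             role: str = "monitoring analyst",
                             target_language: str = "fr") -> dict:
    """Assemble a request for a self-hosted generative model,
    exposing the user-controllable parameters described above."""
    if not 0.0 <= creativity <= 1.0:
        raise ValueError("creativity must be between 0.0 and 1.0")
    return {
        # The assumed role and target language are injected as a system
        # instruction; the creativity level becomes the sampling temperature.
        "system": f"You are a {role}. Answer in '{target_language}'.",
        "prompt": prompt,
        "temperature": creativity,  # higher = more creative, less predictable
    }

request = build_generation_request("Summarize today's monitoring results",
                                   creativity=0.7,
                                   role="market analyst",
                                   target_language="en")
```

Exposing these knobs in the interface, rather than hard-coding them, is what keeps the user in control of the black box.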
Our generative AI roadmap includes labeling both content generated entirely by our generative AI and content co-produced by a user with the support of our Curebot Assistant, so that the use of AI in the business intelligence tool is as transparent as possible.
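Such labeling amounts to attaching a provenance marker to every piece of content. A minimal sketch, with names of our own choosing rather than Curebot's actual data model:

```python
# Illustrative provenance labeling; all names here are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    HUMAN = "human"                # written entirely by the user
    AI_GENERATED = "ai_generated"  # produced entirely by the assistant
    AI_ASSISTED = "ai_assisted"    # co-produced by user and assistant

@dataclass
class AnalysisNote:
    text: str
    provenance: Provenance

# A note drafted by the assistant, then edited by the analyst:
note = AnalysisNote("Competitor X raised a new funding round.",
                    Provenance.AI_ASSISTED)
```

Carrying the label with the content, rather than inferring it afterwards, is what lets readers downstream judge how much human review a given piece of analysis has received.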
Monitoring and AI: the need for risk awareness
Through the interfaces we offer, we are also committed to making our users aware of the potential errors and limitations of generative AI.
As part of our support, our consultants can help you raise employee awareness of the challenges of trust and information security in the age of generative AIs.
We are also convinced that information professionals have a major role to play in this awareness-raising: their skills, and their role as guarantors of the reliable information on which the organization bases its decisions, make them natural advocates, just as raising employee awareness of cyber risks has become a responsibility of security departments.
AI-enhanced strategic intelligence: assistance and enhancement of the human role
Our aim is to strengthen the role of business intelligence professionals, not by replacing their expertise or eliminating the need for their work, but on the contrary by facilitating, encouraging and stimulating human analysis. We are convinced that these generative AI contributions can benefit all of a company's employees.
The Curebot Assistant is designed to help at every stage: synthesizing and formulating ideas, structuring thought, sharpening critical thinking, recommending complementary information searches, and animating monitoring and analysis networks.
Towards the full integration of AI in business intelligence
We are convinced that generative AI can provide valuable support at every stage of the business intelligence cycle. Our challenge is to create interfaces and prompts that maximize this assistance, to make the fruits of AI-backed human analysis actionable, and to remain vigilant about the errors, biases and risks involved in using AI.
Interested in integrating generative AI into your intelligence platform? Want to explore how this technology can transform your monitoring practices?
Follow us on LinkedIn