What's the problem? Copilot has been rolled out, but the results fall short of expectations. After the initial euphoria, usage stagnates. There can be several reasons for this. First, it is important to realize that Copilot can assist with many tasks, but it does not complete them on its own. Employees who believe that Copilot can handle all their work autonomously and without errors are bound to be disappointed. Their expectations must be managed: users need to formulate their requirements clearly from the very beginning, monitor how the result develops, and check the outcome carefully. It is crucial to communicate Copilot's role clearly, as a support, not as the driver of tasks.
Focus on the user
Copilot can partially complete many tasks through prompts or show users how to get those tasks done themselves. The decisive factor, however, is the formulation of the prompt: it must be as clear and unambiguous as possible. Employees need to learn how to write effective prompts to achieve the desired results. This requires training and, especially at the beginning, a lot of practice, because prompts differ significantly from the familiar search queries on Google or Bing.
A large language model (LLM) like Copilot understands natural language and expects detailed prompts that describe exactly what the user wants. Employees cannot make this transition overnight, even with supporting training. Users need time to learn how to phrase their requests accurately.
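To make the difference between a search query and a detailed prompt concrete, here is a minimal sketch in plain Python. It does not call any Copilot API; the prompt wording and the role/task/context/format structure are illustrative assumptions, not an official Copilot template.

```python
# Sketch: assembling a detailed, LLM-style prompt from explicit components.
# The structure (role, task, context, output format) is a common prompting
# pattern, shown here as an assumption rather than a prescribed standard.

def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt from explicit components."""
    return (
        f"Act as {role}. "
        f"{task} "
        f"Context: {context} "
        f"Format the answer as {output_format}."
    )

# A search-engine habit: terse keywords, lots of ambiguity.
search_style = "meeting summary q3 sales"

# An LLM-style prompt: explicit role, task, context, and output format.
detailed = build_prompt(
    role="an experienced sales manager",
    task="Summarize the Q3 sales review meeting in five bullet points.",
    context="The meeting covered pipeline status, two lost deals, and the Q4 forecast.",
    output_format="a bulleted list with one action item per bullet",
)

print(detailed)
```

The point of the sketch is the habit change it encodes: instead of throwing keywords at the tool, the user states who should answer, what exactly is wanted, which context applies, and how the output should look.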
Data quality and access
What else can hold Copilot back? For example, the data that the tool can access in the Microsoft 365 tenant. The more up-to-date and relevant the data, the better the results. An LLM does not improve data quality; on the contrary, if the tenant is an unmanaged data dump, Copilot will only be able to provide mediocre support. Failures of the past now come back to haunt the present. Tenant hygiene, i.e., the clean-up of old data, solid role-based access control, and sensible but strict governance, should always have been part of M365. Many organizations have been sloppy here. Now it is high time to introduce automatic deletion of old data by means of retention policies and to revise access management. Modern ways of working, such as sharing links instead of sending file copies, also improve data quality.
Guidance and training
Even if an LLM like Copilot tries to interpret instructions as well as possible, many prompts are far from perfect at the beginning. Training and instruction help, but they must be aligned with the user's knowledge and experience. Depending on where an employee stands on their AI journey, they need different tips and hints. If the instructions are too generic, the AI will never reach its potential; if they are missing entirely, functions remain unused. And without clear dos and don'ts, both queries and results can be ethically questionable. The best approach for organizations is to plan an effective training concept and appoint competent AI champions as points of contact. An important part of training is the regular exchange of ideas among colleagues: prompt examples from one user can inspire the next and lead to higher adoption. In the end, however, the most important thing is that Copilot is given a productive role in daily work. The tool should be used for everyday tasks, and users should be encouraged to involve Copilot in every assignment. They will soon find out where Copilot offers the greatest assistance and benefit.