AI at Work: Opportunities and Challenges for Knowledge Workers

Artificial intelligence is no longer a futuristic concept – it is here, embedded in the daily lives of knowledge workers. Indeed, almost everyone I talk to – colleagues, family, friends and clients – now leans on ChatGPT regularly. Large language models (LLMs) such as ChatGPT, Claude, and Gemini are transforming how tasks are approached, decisions are made, and ideas are generated. For consultants, analysts, researchers, marketers, lawyers, accountants and other professionals like myself – people whose work is built on information, interpretation, and communication – these tools present both enormous opportunities and significant risks.


The Benefits

1. Speed and Efficiency

LLMs can draft documents, summarise research, or generate creative options in a fraction of the time it would take a human alone. What once required hours of searching, synthesising, or formatting can now be achieved in minutes or even seconds, giving knowledge workers more time to focus on higher-order thinking and decision-making. A large-scale study of over 200 professionals confirmed this shift, showing that workers are already using LLMs to streamline coding, refine and improve written text, brainstorm new ideas, and seek quick guidance on unfamiliar topics (Brachman et al., 2025). They also rely on these tools for generating documentation and preparing reports, making routine yet essential tasks faster and less cognitively demanding. Looking ahead, participants expressed a strong desire to expand their use of LLMs, particularly for extracting insights from their own data and automating repetitive processes. This indicates that workers are not just using AI for efficiency, but also envisioning it as a partner in more sophisticated, insight-driven aspects of their roles.


2. Expanding Access to Knowledge

With their ability to scan across vast domains, LLMs provide rapid exposure to new ideas, practices, and knowledge sources that might otherwise remain inaccessible. They can synthesise information from multiple disciplines – technology, management, psychology, law, or science – and present it in a way that is both digestible and adaptable to the task at hand. This cross-pollination of insights helps break down organisational and disciplinary silos, enabling professionals to approach challenges with a richer set of tools and perspectives. By surfacing connections that humans might overlook due to time constraints or limited expertise, LLMs can broaden the scope of problem-solving, spark innovation, and encourage more collaborative and integrative thinking across teams and functions.


3. Enhancing Creativity

By automating routine analysis, LLMs can help overcome “blank page” paralysis: the daunting challenge of starting from scratch. Instead of staring at an empty screen, knowledge workers can use AI to generate a scaffold – an initial draft, outline, or set of options – that acts as a springboard for further thinking. This reduces cognitive load and saves time on lower-value tasks, allowing employees to direct their energy toward higher-order activities such as strategy, innovation, and nuanced problem-solving. In this way, LLMs don’t replace human creativity but act as catalysts, accelerating the move from idea to execution and enabling professionals to focus on contributions that require judgment, originality, and vision.


4. Personalised Learning Partner

LLMs can act as on-demand tutors, explainers, and sounding boards, helping professionals upskill quickly in unfamiliar areas without the delay of formal training. Instead of searching through dense manuals or waiting for a colleague’s input, workers can pose questions in natural language and receive tailored explanations, examples, or step-by-step guidance. This just-in-time learning supports continuous professional development, lowers barriers to exploring new fields, and encourages experimentation. For complex topics, LLMs can adjust the depth of explanation – from a simple overview to full technical detail – making them adaptable to different levels of expertise. By offering a safe, judgment-free space to test ideas and clarify misunderstandings, LLMs empower workers to build competence and confidence more quickly, which in turn strengthens both individual performance and organisational agility.


The Issues and Risks

1. Accuracy and Reliability

LLMs are designed to generate fluent, human-like responses, but fluency is not the same as truth. They can produce information that is factually wrong, outdated, or subtly biased, yet present it in a way that appears highly convincing. This phenomenon, sometimes called “hallucination,” means that professionals risk basing decisions on unreliable outputs if they don’t verify results against credible sources. In high-stakes contexts – such as legal advice, medical interpretation, or financial analysis – the cost of error could be severe, ranging from reputational harm to legal liability. The key challenge is not whether LLMs can provide information, but whether knowledge workers apply the critical thinking needed to check, validate, and refine it.


2. Erosion of Core Skills

While LLMs can speed up routine writing and analysis, there is a danger in over-reliance: employees may have fewer opportunities to practise the very skills that underpin their expertise. Writing persuasively, solving complex problems, and synthesising diverse sources all require repeated, deliberate practice. If AI does too much of the heavy lifting, professionals may lose their edge in these areas over time. Just as calculators reshaped the need for mental arithmetic, LLMs could reshape expectations for communication and reasoning skills. The challenge is to use AI as a scaffold for learning and productivity, not as a crutch that weakens capability.


3. Autonomy and Control

A central part of motivation at work is having a sense of ownership and control over one’s tasks. When AI tools begin to suggest, nudge, or even automate decisions, workers may feel their autonomy shrinking. Instead of choosing how to approach a problem, they may be left to simply follow machine recommendations. Research grounded in Self-Determination Theory shows that when people perceive technology as a hindrance rather than a resource, it undermines psychological needs and diminishes well-being (Sadeghi, 2024). This highlights the importance of designing AI systems that support, rather than dictate, human decision-making—keeping people in the driver’s seat.


4. Confidentiality and Ethics

Because most LLMs operate on cloud-based infrastructure, any information entered may be stored, logged, or used to further train the model. For organisations dealing with sensitive client, financial, or personal data, this creates significant privacy risks. A misplaced copy-paste of confidential material could result in a data breach or compliance violation. Beyond data security, ethical issues also surface: what if the AI reproduces biased stereotypes, misuses intellectual property, or obscures accountability for errors? To address these challenges, organisations must set clear policies for responsible AI use, emphasising data governance, transparency, and ethical safeguards.
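One practical safeguard implied here is scrubbing sensitive values before a prompt ever leaves the organisation. The sketch below is a minimal, illustrative example only – the regex patterns and the `redact` helper are assumptions for demonstration, and a real deployment would use a vetted PII-detection library plus organisation-specific rules.

```python
import re

# Hypothetical patterns for illustration only; real policies would cover
# names, account numbers, addresses, and use a dedicated PII library.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive values with placeholders before the text
    is sent to a cloud-hosted LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise this complaint from jane.doe@example.com, ph +61 400 123 456."
print(redact(prompt))
# → Summarise this complaint from [EMAIL], ph [PHONE].
```

Even a lightweight gate like this makes the policy concrete: the redaction step sits between the worker and the model, so a careless copy-paste is caught before it becomes a breach.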


5. Inequality and Access

The benefits of AI are not evenly distributed. Large, well-resourced organisations can invest in premium AI tools, customised integrations, and staff training, giving their employees a powerful advantage. Smaller organisations, or those in regions with limited digital infrastructure, may struggle to access the same level of capability. At the individual level, workers with higher digital literacy will find it easier to adopt and adapt, while others may be left behind. This creates the risk of widening inequalities within and between workplaces. Ensuring equitable access to training, resources, and support will be critical to prevent AI from deepening divides in the knowledge economy.


An Evidence-Based Overview

Recent research confirms these patterns. Large-scale studies show that knowledge workers are already using LLMs to draft documents, generate ideas, and accelerate writing, with strong interest in expanding these uses further (Brachman et al., 2025). Evidence also suggests that AI can enhance productivity and well-being when paired with effective knowledge-sharing practices (Abdullah et al., 2023). At the same time, concerns about fairness, trust, and job security are prominent (Sadeghi, 2024), and frameworks for analysing AI-mediated knowledge access highlight risks of shifting power and value in unintended ways (Gausen et al., 2023). From a broader perspective, AI’s impact on work appears two-sided: it can serve as a resource that enhances motivation and performance, or as a demand that increases stress and erodes well-being (Morgan et al., 2019).

Finding the Balance

The future of work with LLMs will not be a question of AI or humans, but AI with humans. For knowledge workers, the challenge is to use these tools in ways that enhance autonomy, competence, and relatedness – the three fundamental psychological needs identified by Self-Determination Theory – so that AI supports well-being rather than diminishing it.

Organisations should:

  • Provide training so employees understand both the strengths and limitations of LLMs.
  • Establish clear policies around data use and ethics.
  • Encourage a culture of experimentation balanced with critical thinking.
  • Design workflows where AI supports but does not replace human judgment.
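The last point – AI supporting rather than replacing human judgment – can be made concrete with a simple approval gate. This is a hypothetical sketch: `generate_draft` stands in for an LLM call, and the `Draft`/`publish` names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False  # flipped only by a human reviewer

def generate_draft(brief: str) -> Draft:
    # Stand-in for an LLM API call; output is unreviewed by design.
    return Draft(text=f"Draft response to: {brief}")

def publish(draft: Draft) -> str:
    # The gate: nothing leaves the workflow without explicit sign-off.
    if not draft.approved:
        raise PermissionError("Draft requires human review before publishing.")
    return draft.text

draft = generate_draft("client onboarding email")
draft.approved = True   # a named reviewer signs off here
print(publish(draft))
```

The design choice is the point: the AI produces options, but the workflow is structured so that a human decision is the only path to action, preserving ownership and accountability.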

Summing Up

Large language models are redefining the landscape for knowledge work. Used wisely, they can increase productivity, creativity, and access to knowledge. Used carelessly, they risk undermining trust, skill development, and well-being. The opportunity lies in recognising both the promise and the pitfalls and equipping knowledge workers to thrive alongside AI.

Further Reading
  • Abdullah, N. H., Abbas, Z., & Arshad, M. (2023). Analyzing the impact of artificial intelligence on employee productivity: The mediating effect of knowledge sharing and well-being. Asia Pacific Journal of Human Resources. Advance online publication. https://doi.org/10.1111/1744-7941.12345
  • Brachman, A., Dubiel, M., & El-Ashry, A. (2025). Current and future use of large language models for knowledge work. Technological Forecasting and Social Change, 210, 122682. https://doi.org/10.1016/j.techfore.2024.122682
  • Cai, H., He, J., Yang, J., & Zhang, Y. (2025). Practices, opportunities and challenges in the fusion of knowledge graphs and large language models. Frontiers in Computer Science, 7, 1590632. https://doi.org/10.3389/fcomp.2025.1590632
  • Gausen, M., Mitra, A., & Lindley, J. (2023). A framework for exploring the consequences of AI-mediated enterprise knowledge access and identifying risks to workers. arXiv Preprint. https://arxiv.org/abs/2312.10076
  • Morgan, J., Manyika, J., & Chui, M. (2019). Toward understanding the impact of artificial intelligence on labor. Proceedings of the National Academy of Sciences, 116(14), 6531-6539. https://doi.org/10.1073/pnas.1901744116
  • Sadeghi, A. (2024). Employee well-being in the age of AI: Perceptions, concerns, behaviors, and outcomes. arXiv Preprint. https://arxiv.org/abs/2412.04796
  • Shahzad, M., Khan, M., & Ullah, S. (2025). A comprehensive review of large language models: Issues and solutions in learning environments. Discover Sustainability, 6, 815. https://doi.org/10.1007/s43621-025-00815-8
