Dr Gitanjaly Chhabra, Assistant Professor, University Canada West and Dr Noor Rizvi, Instructor in the Department of English, Kansas State University, discuss the influence of collective moral imagination on human decisions and how it improves accountability and fairness.
As artificial intelligence (AI) continues to advance, human decision-making is evolving through systems that integrate human judgement with machine input. For these systems to operate effectively, we need to stay informed and exercise moral imagination: the ability to assess, imagine, and handle complex situations with creativity and empathy.
Creating a collective moral imagination between humans and machines is crucial today. By combining human empathy and contextual understanding with AI’s capabilities, organisations can make more inclusive, responsible, innovative, and creative decisions.
In sectors such as healthcare, autonomous vehicles, and finance, this approach can yield results that are both effective and ethically sound.
A collective moral imagination should encompass humans and machines working collaboratively to reimagine ethical solutions, foster open dialogues, and promote a shared moral responsibility.
Decision-making domains
Healthcare sector
Currently, in the healthcare sector, the relationship between doctors and patients is often strained; with thoughtful AI integration, however, space for real healing and connection can emerge. AI’s capacity to personalise care, optimise resources, and support mental and physical well-being represents not only a technical advance but a moral reorientation toward more responsive, context-aware, and humane healthcare practices.
According to a 2025 World Economic Forum report, an AI software twin, trained on 800 brain scans of stroke patients and trialled on 2,000 patients, showed impressive results. AI is also able to spot more broken bones on X-rays than humans alone. Researchers suggest that integrating AI-generated insights with human expertise can accelerate both diagnosis and the development of effective treatments.
Financial sector
Similarly, AI can personalise financial planning and accelerate risk assessment and management. According to PwC, assets managed by robo-advisors are projected to rise to $5.9 trillion by 2027, more than double the $2.5 trillion recorded in 2022.
Addressing and mitigating bias requires a comprehensive, multidimensional strategy: rigorous assessment of datasets to prevent historical disparities, ongoing evaluation of fairness through relevant performance metrics, and the use of explainable AI methods to ensure transparency in decision-making.
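One of the fairness metrics this strategy refers to can be made concrete with a small sketch. The function below is illustrative only (it is not from the article or any named toolkit): it computes a demographic parity gap, the difference in approval rates between two groups, which an organisation could monitor against a tolerance threshold.

```python
# Illustrative sketch (hypothetical data and threshold): monitoring a
# lending model's demographic parity gap between two groups.
def demographic_parity_gap(approvals, groups):
    """Absolute difference in approval rates between groups 0 and 1."""
    rate = {}
    for g in (0, 1):
        decisions = [a for a, grp in zip(approvals, groups) if grp == g]
        rate[g] = sum(decisions) / len(decisions)
    return abs(rate[0] - rate[1])

approvals = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical model decisions
groups    = [0, 0, 0, 0, 1, 1, 1, 1]  # hypothetical protected attribute
gap = demographic_parity_gap(approvals, groups)
print(f"Demographic parity gap: {gap:.2f}")  # review model if gap is large
```

A large gap would trigger exactly the kind of human review and dataset reassessment the paragraph above calls for.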
In critical financial contexts, human judgement remains essential to recognise and rectify issues that automated systems may overlook. Organisations must integrate fair assessment, ethical governance, and continuous supervision across the entire AI lifecycle to maintain public confidence and meet regulatory standards.
Apart from automating work, AI can reconfigure financial ethics. Predictive algorithms, for example, can detect dishonest trading patterns in advance, allowing human regulators to pre-empt them. Hybrid systems that pair algorithmic precision with human moral awareness can help ensure fairness in credit rating, lending, and fraud prevention. Frameworks such as the OECD AI Principles can be adopted by financial institutions to shift from ‘profit-driven’ metrics toward measures of ‘ethical profitability,’ balancing corporate social responsibility with fiscal growth.
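The division of labour described here, an algorithm detects, a human regulator decides, can be sketched in a few lines. The detector below is a deliberately simple, hypothetical z-score outlier check (real surveillance systems are far more sophisticated); the point is that flagged trades are routed to a person, not blocked automatically.

```python
import statistics

# Hypothetical sketch: a simple statistical detector flags unusual trade
# volumes, and flagged items are escalated to a human regulator for the
# moral and contextual judgement the algorithm lacks.
def flag_for_review(volumes, threshold=3.0):
    """Return indices of trades whose z-score exceeds the threshold."""
    mean = statistics.mean(volumes)
    stdev = statistics.stdev(volumes)
    return [i for i, v in enumerate(volumes)
            if stdev > 0 and abs(v - mean) / stdev > threshold]

volumes = [100, 102, 98, 101, 99, 500, 103]  # one suspicious spike
suspicious = flag_for_review(volumes, threshold=2.0)
# Flagged indices go to a human reviewer, not an automated block.
```

Keeping the final decision with a human reviewer is what makes this a hybrid system rather than pure automation.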
In doing so, the financial system becomes at once technologically and ethically secure, grounded in trust, transparency, and equity.
Autonomous vehicles
In autonomous vehicles, a collective moral imagination of humans and machines can create transformative and innovative ethical systems in organisations, enhancing diverse perspectives and creativity. By fostering dialogue between machines and humans, organisations can move beyond binary choices toward ethically responsive, more inclusive, and future-oriented decisions.
For a collective moral imagination, we must focus on: AI and human insights, AI and human perspectives, and navigating AI and human challenges or biases in decision-making processes. For example, a hybrid approach in the spirit of MIT’s Moral Machine experiment could reframe the ‘trolley problem’ by blending situational awareness from AI sensors with human ethical reflection to identify the least harmful outcome.
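The blending described above can be sketched as a simple scoring rule. Everything in this example is hypothetical: the machine supplies an estimated harm probability per action (from its sensors), a human supplies an ethical weight, and the system picks the action with the lowest weighted expected harm.

```python
# Hypothetical sketch of a hybrid decision rule: machine-estimated harm
# probabilities combined with human-assigned ethical weights.
def least_harmful_action(actions):
    """actions maps name -> (machine_harm_probability, human_ethical_weight);
    returns the action with the lowest probability * weight score."""
    def expected_harm(item):
        _, (prob, weight) = item
        return prob * weight
    return min(actions.items(), key=expected_harm)[0]

actions = {
    "brake_hard":  (0.30, 1.0),   # all values are illustrative only
    "swerve_left": (0.10, 4.0),   # high human weight: endangers bystanders
    "continue":    (0.90, 1.0),
}
choice = least_harmful_action(actions)
```

The human weights are the ethical reflection; the probabilities are the sensor awareness. Neither input alone would select the same action, which is the point of the hybrid design.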
As AI vehicles learn from real-world feedback and data, they can become compassionate copilots that balance capability, safety, and moral responsibility. The car of the future, therefore, is not just autonomous but also morally aware.
Supporting collective moral imagination
As humans and machines learn from each other, we must ensure that decisions are made in line with our collective moral imagination, incorporating both human and machine outputs.
We propose developing systems that enable machines and humans to collaborate in building an ethical and sustainable future.