C-7, Jalan Dataran SD1,
PJU 9, Bandar Sri Damansara,
52200, Kuala Lumpur
AI and Ethics
By Nash Nithi
In today’s era of rapid technological progress, artificial intelligence (AI) has become a powerful force reshaping industries such as healthcare, finance, and transportation. While AI offers tremendous opportunities for innovation and efficiency, it also raises critical ethical questions that must be addressed to ensure its responsible use. This article highlights some of the pressing concerns—including algorithmic bias, data privacy, and responsible AI practices—and connects them to broader ethical frameworks championed by educational institutions like the University of Information Technology (UNIMY).
Algorithmic Bias: An Ongoing Concern
A key ethical issue in AI is algorithmic bias, which arises when flawed data or assumptions cause AI systems to produce unfair outcomes. For instance, facial recognition software has been criticised for being less accurate in identifying people from certain racial backgrounds, reinforcing inequality and discrimination in areas such as hiring, policing, and loan approvals.
Institutions like UNIMY play an important role in tackling this challenge by embedding ethics into AI education. By training students to understand how bias originates and how it impacts real-world systems, universities prepare them to design fairer and more transparent AI models. Additionally, academic research contributes to creating tools and methods to detect and reduce bias in algorithms.
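One common tool for detecting the kind of bias described above is the disparate impact ratio: comparing selection rates between demographic groups. The sketch below uses hypothetical hiring decisions (the data, function names, and the 0.8 threshold convention are illustrative, not from this article).

```python
# Minimal sketch of a fairness check on a model's decisions.
# The decision lists are hypothetical; in practice they would come from
# a trained model's outputs, split by a protected attribute.

def selection_rate(decisions):
    """Fraction of positive decisions (e.g. 'hire') in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of selection rates between two groups.

    Values well below 1.0 suggest group_a is selected less often;
    the 'four-fifths rule' in US employment guidance treats ratios
    under 0.8 as a potential red flag.
    """
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical hiring decisions (1 = selected, 0 = rejected)
group_a = [1, 0, 0, 0, 1, 0, 0, 0]   # selection rate 0.25
group_b = [1, 1, 0, 1, 0, 1, 0, 1]   # selection rate 0.625

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.625 = 0.40
```

A check like this is only a starting point: it measures one narrow notion of fairness (demographic parity) and says nothing about why the disparity arises, which is where the deeper curriculum work described above comes in.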
Data Privacy: Protecting Personal Data
AI relies heavily on large amounts of data, which raises significant concerns about consent, security, and misuse of personal information. With data breaches and cases of unauthorised use becoming more common, public trust in how AI handles personal data is increasingly fragile.

To address this, newer methods are being developed to prioritise privacy in AI systems. Examples include federated learning, where models are trained on decentralised data without centralised storage, and differential privacy, which protects individual identities by adding randomness to datasets. Universities like UNIMY can lead in this area by conducting research and equipping students with skills in privacy-focused technologies, ensuring that future AI experts prioritise the safeguarding of personal information.
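The "adding randomness" idea behind differential privacy can be made concrete with its classic Laplace mechanism applied to a count query. The sketch below is illustrative: the dataset, the query, and the epsilon value are assumptions for the example, not details from this article.

```python
import random

# Minimal sketch of differential privacy's Laplace mechanism on a count
# query. Dataset and epsilon are hypothetical, chosen for illustration.

def dp_count(values, predicate, epsilon):
    """Return a differentially private count.

    A count query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so adding Laplace noise with
    scale 1/epsilon satisfies epsilon-differential privacy. The
    difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical ages; query: how many people are over 40?
ages = [23, 45, 31, 52, 38, 60, 29, 41]
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5)
print(f"Noisy count: {noisy:.1f}")  # true count is 4, plus random noise
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy, which is exactly the trade-off students working with privacy-focused technologies learn to reason about.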
Responsible AI Development: Principles and Implementation
Responsible AI refers to developing and applying AI systems according to principles such as fairness, transparency, accountability, and safety. However, putting these values into practice is not always straightforward—it requires making AI understandable to users, ensuring developers are held accountable, and guaranteeing that AI benefits society.
Academic institutions play a central role in advancing responsible AI. UNIMY, for example, can integrate case studies and hands-on projects into its curriculum, encouraging students to consider ethics in practical scenarios. Collaborations across disciplines—bringing together engineers, ethicists, and industry leaders—further enrich education and ensure students develop a holistic view of AI’s ethical implications.
The Future of AI: Policy and Collaboration
As AI continues to evolve, policymaking becomes increasingly crucial in shaping its ethical landscape. Governments and international organisations have already introduced regulations, such as the EU’s GDPR, which sets standards for data privacy, while other frameworks address wider ethical concerns.
UNIMY and similar institutions can contribute meaningfully by sharing expertise, conducting research, and collaborating with policymakers and industry. These partnerships help ensure that regulations support both technological progress and ethical responsibility.
The ethical challenges posed by AI are complex but vital for its sustainable integration into society. Educational institutions, especially universities like UNIMY, play a pivotal role in developing ethical AI practices through teaching, research, and engagement with policy debates. By instilling values of accountability, fairness, and privacy, universities help shape professionals who are not only skilled but also conscious of AI’s broader societal impact.
Looking ahead, the dialogue between technology and ethics will only intensify as AI becomes more advanced. It is essential for academics, industry practitioners, policymakers, and the public to work together to address emerging concerns and ensure AI development aligns with human rights and dignity.
In conclusion, building responsible AI is an ongoing process. Institutions like UNIMY are at the heart of this journey, guiding innovation while acting as ethical compasses. Through education, research, and policy influence, they help ensure AI technologies bring lasting, positive change to society.
Explore the programmes offered by UNIMY today.