Discuss why people do not trust AI systems

Instruction Details

Q1 Advanced image processing using artificial intelligence (AI) techniques finds an increasing number of applications nowadays. Examples of such applications include, but are not limited to, optical character recognition on printed forms, facial recognition for security systems, object recognition and adaptive manoeuvring for autonomous driving, and robo-advisors for financial services. Data is the number one resource for any AI system.

a) Name and explain five (5) concerns when collecting data to train an AI system. [5 marks]

b) Discuss how to address each of the concerns in your answer above. [5 marks]

c) When creating an AI system that aims for the global market, it must be able to fulfil regulatory requirements of multiple countries. List and briefly describe a set of data handling principles that satisfy this requirement. [7 marks]

d) Discuss why people do not trust AI systems. [5 marks]

e) Suggest and explain three (3) approaches to increase people’s trust in AI systems. [3 marks]

Answer Guide

a) Concerns when collecting data to train an AI system:

  1. Bias and Fairness: Data can contain inherent biases that mirror societal prejudices and inequities. These biases can lead to unfair or discriminatory outcomes when the AI system makes decisions based on the data.
  2. Privacy: Collecting sensitive or personally identifiable information raises concerns about the privacy and security of individuals. Mishandling of personal data can result in breaches, identity theft, or unauthorized access.
  3. Quality and Reliability: The quality and reliability of the data directly impact the AI system’s performance. Inaccurate, incomplete, or noisy data can lead to poor model performance and unreliable predictions.
  4. Data Diversity: A lack of diversity in the training data can result in limited generalization of the AI system. If the data is not representative of various scenarios and populations, the AI may fail to perform well in real-world situations.
  5. Data Collection Methods: The methods used to collect data can influence the data’s quality and biases. Improper data collection methods can introduce errors and distortions that affect the AI’s performance.

b) Addressing Concerns:

  1. Bias and Fairness: Use techniques like debiasing and fairness-aware learning to identify and mitigate biases. Employ diverse and representative datasets, and regularly audit model outputs for fairness.
  2. Privacy: Implement privacy-preserving techniques such as data anonymization, differential privacy, or secure multiparty computation. Minimize the collection of personally identifiable information and encrypt sensitive data.
  3. Quality and Reliability: Implement data preprocessing and cleaning steps to filter out noisy and irrelevant data. Use techniques like cross-validation to assess model performance on unseen data.
  4. Data Diversity: Actively seek out and include data from various sources, demographics, and scenarios to ensure a more comprehensive dataset.
  5. Data Collection Methods: Employ standardized data collection protocols and adhere to ethical guidelines. Implement rigorous validation and quality control measures during data collection.
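The differential-privacy technique mentioned in point 2 can be sketched with the Laplace mechanism. This is a minimal illustrative example, not a production implementation: the helper names (`laplace_noise`, `dp_count`) and the sample data are invented here, and the sketch assumes a simple counting query, which has sensitivity 1.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-transform sampling of a zero-mean Laplace distribution.
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    # A counting query changes by at most 1 when one record is added or
    # removed (sensitivity 1), so Laplace noise with scale 1/epsilon
    # gives epsilon-differential privacy for this query.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 31, 45, 52, 29, 61, 38]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
print(f"noisy count of people aged 40+: {noisy:.2f}")
```

A smaller epsilon adds more noise (stronger privacy, less accuracy); the right trade-off depends on how sensitive the data is.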

c) Data Handling Principles for Global Regulatory Compliance:

  1. Explicit Consent: Obtain explicit consent from users before collecting their data, ensuring transparency about how the data will be used.
  2. Data Minimization: Collect only the necessary data required for the AI system’s functionality, minimizing the risk of privacy breaches.
  3. Localization: Adapt the AI system to comply with each country's data protection laws and regulations, such as the GDPR in the EU or HIPAA for health data in the US.
  4. Data Portability: Allow users to easily transfer their data between different services and platforms, in accordance with data portability rights.
  5. Anonymization: Employ techniques like anonymization and pseudonymization to protect user identities and sensitive information.
  6. Regular Auditing: Conduct regular audits to ensure compliance with changing regulations and to assess data handling practices.
  7. User Rights: Provide users with control over their data, including the ability to access, modify, or delete their data.
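One common way to realise the pseudonymization principle above is keyed hashing, which replaces a direct identifier with a stable token that cannot be reversed without the key. The sketch below is illustrative only: the key value, field names, and `pseudonymize` helper are invented for the example, and a real deployment would keep the key in a secrets store and rotate it under a documented policy.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-me-in-a-vault"  # illustrative placeholder

def pseudonymize(user_id: str) -> str:
    # Keyed hashing (HMAC-SHA256) maps each identifier to a stable
    # token; without the key, tokens cannot be reversed or re-linked
    # across datasets hashed with different keys.
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"user_id": "alice@example.com", "age_band": "30-39"}
safe_record = {**record, "user_id": pseudonymize(record["user_id"])}
print(safe_record)
```

Because the same input always yields the same token, records about one user can still be joined for training while the raw identifier never leaves the collection pipeline.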

d) Lack of Trust in AI Systems:

  1. Opacity: Many AI systems operate as “black boxes,” making it difficult to understand their decision-making processes.
  2. Bias and Discrimination: AI systems might produce biased outcomes that discriminate against certain groups or individuals.
  3. Unreliable Predictions: Poor performance or unexpected behavior can erode trust in AI systems.
  4. Data Security Concerns: Concerns about data breaches or misuse of personal information can lead to mistrust.
  5. Lack of Human Oversight: Relying solely on AI without human oversight can lead to errors and mistrust.

e) Approaches to Increase Trust in AI Systems:

  1. Transparency: Make AI systems more interpretable by providing explanations for their decisions, enabling users to understand the rationale behind outcomes.
  2. Fairness Audits: Regularly assess AI systems for biases and fairness issues, and take corrective actions to address any identified problems.
  3. Human-AI Collaboration: Emphasize collaboration between AI and human experts to create a synergy that combines the strengths of both, ensuring that human expertise can override AI decisions when necessary.
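A fairness audit (approach 2) often starts by comparing selection rates across demographic groups. The sketch below computes the disparate-impact ratio and checks it against the informal "80% rule"; the group labels, decision data, and helper names are hypothetical, and a real audit would use the system's actual decision logs and legally appropriate group definitions.

```python
from collections import defaultdict

def selection_rates(decisions):
    # decisions: list of (group, approved) pairs from the AI system's logs.
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions) -> float:
    # Ratio of the lowest to the highest group selection rate;
    # the informal "80% rule" flags ratios below 0.8 for review.
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

decisions = ([("A", True)] * 8 + [("A", False)] * 2 +
             [("B", True)] * 5 + [("B", False)] * 5)
ratio = disparate_impact(decisions)
print(f"disparate impact ratio: {ratio:.2f}")
```

A flagged ratio is a trigger for investigation, not proof of discrimination; the corrective actions mentioned above (rebalancing data, adjusting thresholds, retraining) follow from that investigation.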
