
Will Artificial Intelligence (AI) take over Human Intelligence? What should students do to ensure AI doesn't override but rather enhances their ability to learn through research?

When we talk about Artificial Intelligence (AI), one thing comes to mind: making computers behave like humans. John McCarthy, the father of AI, defined it as "the science and engineering of making intelligent machines." AI is an interdisciplinary field that combines computer science, mathematics, neuroscience, linguistics, ethics, philosophy, data science, robotics, and many other disciplines to create systems capable of problem-solving, learning, and decision-making.

FIRSTLY, WHAT WAS AI REALLY DESIGNED TO DO?

Let us consider the Turing test: in 1950, Alan Turing published a landmark paper, "Computing Machinery and Intelligence," in which he speculated about the possibility of creating machines that think. He defined a thinking machine as one that could carry on a conversation indistinguishable from that of a human being. Over the years, AI research has produced many different expert systems and stretched the capabilities of the field, but one thing still stands out the most: AI's main goal is to aid humans.
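
To make the setup concrete, here is a minimal, hypothetical sketch (in Python, my own illustration rather than anything taken from Turing's paper) of the structure of that test, often called the imitation game: an interrogator questions two hidden respondents, one human and one machine, and tries to tell them apart.

```python
# Toy sketch of Turing's "imitation game" (not a real chatbot or a real test).
import random

def human_respondent(question: str) -> str:
    # Placeholder: in a real test, a person would type an answer here.
    return "Let me think about that for a moment."

def machine_respondent(question: str) -> str:
    # Placeholder: in a real test, this would be a conversational program.
    return "That is an interesting question."

def imitation_game(questions, interrogator_guess) -> bool:
    # Hide the human and the machine behind the anonymous labels "A" and "B".
    players = [human_respondent, machine_respondent]
    random.shuffle(players)
    respondents = dict(zip(["A", "B"], players))
    machine_label = "A" if respondents["A"] is machine_respondent else "B"

    # The interrogator sees only labels and answers, never who is answering.
    transcript = {label: [(q, answer(q)) for q in questions]
                  for label, answer in respondents.items()}

    # The machine "passes" this round if the interrogator guesses wrong.
    return interrogator_guess(transcript) != machine_label

# Example: an interrogator who can only guess at random is fooled about half the time.
print(imitation_game(["Do you dream?", "What is 7 x 8?"],
                     lambda transcript: random.choice(["A", "B"])))
```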

 

SECONDLY, WHAT ARE THE CONCERNS OVER AI EVOLUTION, IMPLEMENTATION & DOMINANCE?

Artificial Intelligence is evolving rapidly, and we are beginning to see countless capabilities and applications in every sector of society. Amongst numerous concerns, one question arises: could Artificial Intelligence take over human intelligence?

As a student in a world increasingly dominated by AI, it is important to understand the capabilities and limitations of AI and to know how to employ it in the quest for knowledge. Artificial Intelligence excels only at specific tasks but lacks self-awareness and general intelligence. It is only as good as the data it has been fed and designed with.

The use of AI systems can lead to unfair or prejudiced outcomes that result from the data and algorithms used in their development and design. Bias in AI systems has become a significant concern because of its potential social, ethical, and legal implications. If historical data reflects societal prejudices, inequalities, or stereotypes, the AI may inadvertently learn and perpetuate these biases.
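
As a simplified illustration of that mechanism, the toy sketch below (an invented example using scikit-learn, not data from any real system) trains a screening model on historical decisions in which one group was favoured; the model then prefers that group even between otherwise identical candidates.

```python
# Toy illustration: a model trained on biased historical decisions repeats the bias.
# Requires scikit-learn; the data is invented purely for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Features: a test score (genuinely relevant) and a group flag (0 or 1).
score = rng.normal(60, 10, n)
group = rng.integers(0, 2, n)

# Historical decisions: hiring depended on the score, but group 1 was also
# systematically favoured, reflecting past prejudice rather than merit.
hired = ((score + 15 * group + rng.normal(0, 5, n)) > 70).astype(int)

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical scores, differing only in group membership.
candidates = np.array([[65, 0], [65, 1]])
print(model.predict_proba(candidates)[:, 1])  # group 1 gets a higher "hire" probability
```

The point is not the specific library but the pattern: whatever regularities exist in the historical data, fair or not, become part of the model's behaviour.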

Organizations have uncovered numerous high-profile examples of bias in AI across a wide range of use cases.
In online advertising: biases in search-engine ad algorithms can reinforce gender bias in job roles. Independent research at Carnegie Mellon University in Pittsburgh revealed that Google's online advertising system displayed high-paying positions to men more often than to women.

As far back as the mid-1980s, a British medical school was found guilty of discrimination after building a computer program to screen candidates. The program closely matched human decisions, but showed a persistent bias against women and against applicants with non-European names. Amazon tried building a similar program to improve its hiring process and almost immediately realized the system was penalizing women, because the dataset it was trained on reflected the male-dominated tech culture of the time. These are just a few examples.

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production…it has the potential, however small one may regard that probability, but it is non-trivial, it has the potential of civilization destruction,” Elon Musk.

 

FURTHERMORE, WHAT SHOULD STUDENTS DO TO ENSURE THAT AI DOESN'T OVERRIDE BUT ENHANCES THEIR ABILITY TO LEARN THROUGH RESEARCH?

Students should rely on AI to augment their learning process, not depend on it entirely. As we all know, students typically find themselves in situations like these:

  • Sourcing information and data for various research purposes
  • Information overload (an abundance of information and information sources), which can make it difficult to sort, organize, and synthesize information
  • Struggling to articulate ideas and research findings
  • Analyzing data and drawing conclusions that align with research goals

Artificial intelligence education tools can analyze assignments, presentations, and projects, providing real-time feedback. This immediate response helps students understand their areas for improvement without waiting for teacher evaluations.

AI points a student in the right direction and should be used to tailor the learning experience to individual students according to their abilities and pace. It shouldn't replace their own learning, but rather help them adapt to a system that improves their learning process.
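
As a rough sketch of what "adapting to abilities and pace" can look like in practice, the hypothetical snippet below raises or lowers question difficulty based on a student's recent answers; real adaptive-learning platforms are far more sophisticated, but the feedback loop is the same basic idea.

```python
# Toy adaptive-practice loop: raise or lower difficulty based on recent answers.
# Purely illustrative; real systems model each learner in much more detail.

def next_difficulty(current: int, recent_results: list[bool]) -> int:
    """Return the difficulty level (1-5) for the next batch of questions."""
    if not recent_results:
        return current
    accuracy = sum(recent_results) / len(recent_results)
    if accuracy >= 0.8:          # mastering this level: step up
        return min(current + 1, 5)
    if accuracy <= 0.4:          # struggling: step down and reinforce basics
        return max(current - 1, 1)
    return current               # otherwise keep practising at this level

# Example: a student answers 4 of 5 level-2 questions correctly,
# so the next questions are drawn from level 3.
print(next_difficulty(2, [True, True, False, True, True]))  # 3
```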

LASTLY, as students continue on this educational journey, mastering subjects and acquiring knowledge, they should realize that AI systems are designed by individuals with proven research, analytical-thinking, and problem-solving skills. The limitations and biases of AI systems reflect their inventors' beliefs and upbringing, and the type of data the systems have been exposed to. These biases are to be corrected by well-rounded, knowledgeable individuals who are just like you. So, study well and let AI point you in the right direction.

About Author

Isaiah Ananso is a dynamic individual who thrives at the intersection of technology, design, and education. As a web designer, graphic designer, writer, and ICT instructor, Isaiah’s diverse skill set allows him to bring a unique perspective to his work and make a lasting impact in multiple domains.