Jin-Hee Lee
Human Computer Interaction Expert

Where AI Meets Humankind: The Importance of HCI in AI


When we think of artificial intelligence (AI), it may be difficult to imagine the human side. How does data interact with the feelings and intuitions we consider fundamentally human? This article introduces the field of human-computer interaction and its crucial role in AI, considering the future frontier of the intersection of these two disciplines.

What is Human-Computer Interaction?

Human-Computer Interaction (abbreviated “HCI”) is a discipline of Computer Science that focuses on the way people interact with technology. Key terms that fall under this umbrella include user interface and user experience (UI/UX), usability, and testing – all centered on the person who meets the technology, rather than treating the tech as a standalone entity. This could cover anything from an app’s color palette, to the button you tap to order food delivery, to why we build products in the first place.

One of the most popular frameworks within the field of HCI is design thinking, which is a process of innovation that identifies a problem and creates a solution. The process consists of 5 steps:

  1. Empathize with your users and their needs,
  2. Define the problem at hand based on concrete need-finding,
  3. Ideate a solution to the problem you’ve identified,
  4. Prototype that solution,
  5. Test the prototype to see what works and what doesn’t.

(There are varying definitions of design thinking that involve more steps, but the 5-step process is the standard, still taught at design institutions such as Stanford University’s d.school.)

By nature, design thinking is iterative: each step and each iteration returns to the user at hand, with the goal of improving their experience. And so, designers circle back after their first round of testing, sometimes through hundreds or even thousands of consecutive rounds of the design thinking process.

So, what does each step look like in action? Say that a designer, Lane, is exploring the domain of dog-walking.

  1. Empathize: At this step, Lane conducts as many interviews as possible with relevant parties. In this case, Lane could interview dog owners, dog walkers, dog sitters, pet stores, and long-term puppy daycares, to name a few.
  2. Define: Say that Lane has identified that dog owners wish their dogs could be walked more, but lack the time to do it themselves.
  3. Ideate: Now, Lane and their team brainstorm as many solution ideas as they can to address the problem they’ve defined. One possible solution is a matching system that connects enthusiastic dog walkers to dog owners.
  4. Prototype: Lane and their team build out a prototype matching system.
  5. Test: The team tests the prototype on their user group, in this case, dog walkers and dog owners. 

Then, back to the ideate step! Or perhaps a reframing of the define step. Whatever the next step may be, it could change after the next round of testing; the process is definitionally iterative.
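For those who think in code, the loop above can be sketched programmatically. This is purely an illustration – design thinking is a human process, not an API, so every function, heuristic, and data structure below is a made-up placeholder – but it captures the iterative core: test feedback flows back into the problem definition.

```python
# Purely illustrative sketch of the design thinking loop.
# Every name and heuristic here is a made-up placeholder.

def design_thinking(observations, satisfied, max_rounds=100):
    """Iterate define -> ideate -> prototype -> test until users are happy.

    observations: initial insights gathered in the empathize step.
    satisfied: stand-in for user testing; returns True when the prototype
    works, or a string describing a newly discovered need when it doesn't.
    """
    insights = list(observations)                        # 1. empathize
    for round_no in range(1, max_rounds + 1):
        problem = max(insights, key=len)                 # 2. define (toy heuristic)
        idea = f"solution for: {problem}"                # 3. ideate (stub)
        prototype = {"idea": idea, "version": round_no}  # 4. prototype (stub)
        feedback = satisfied(prototype)                  # 5. test: back to the user
        if feedback is True:
            return prototype
        insights.append(feedback)  # new insight reshapes the problem definition
    return None

# Lane's dog-walking project: round one surfaces a new need ("walkers need
# vetting"), and round two passes user testing.
prototype = design_thinking(
    ["owners lack time to walk dogs"],
    satisfied=lambda p: True if p["version"] >= 2 else "walkers need vetting",
)
print(prototype)  # {'idea': 'solution for: owners lack time to walk dogs', 'version': 2}
```

Notice that the loop has no fixed exit other than the user saying “this works” – which is exactly the point the process makes.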

In Razzouk and Shute’s article reviewing the current literature around design thinking, the characteristics of a design thinker include “Human- and environment-centered concern,” “Ability to use language as a tool,” and “Affinity for teamwork,” to name a few. All arrows lead back to what works for the human being, from the external environments we interact with to the complexities of language to the ability to collaborate. At its core, design thinking is a distinctly human framework.

Why is HCI important?

The concept of building technology for the people who will use it might seem like a given. But in an age of rapidly moving technology and increasingly powerful tools for building quickly, the foundational focus on the user can easily be overlooked. Moreover, early-stage companies with limited resources may prioritize engineering over the human-focused elements as they begin building deliverables.

But what if we shifted the narrative to understand such “human-focused elements” as an inherent angle of engineering? As human beings, we have crucial values that are culturally sensitive, contextual, and dynamic – these values should be embedded into the software we build from the start. 

Indeed, research experts have been affirming for more than a decade that human empathy is crucial in design and user-experience methodologies and that it can transform the relationship between designers, users, and artifacts. This raises the important question of the who, which can sometimes be overshadowed by the what and how of the creation process. Derrick Hogburn argues for the importance of HCI as we consider the global network, including the so-called developing countries. He charges the HCI community to adopt a global perspective that calls for collaboration between the knowledge of developed and developing countries alike, enabling global integration and development. Of course, we must be wary of the savior complex in discussions such as this, emphasizing collaboration over any forced adoption or coercion.

Case study: Amazon’s facial recognition technology

To concretely illustrate the importance of HCI, let us examine a case in which its critical early stages – user research and human-centered testing – were neglected.

In 2018, the American Civil Liberties Union (ACLU) conducted a study of Amazon’s face surveillance technology “Rekognition.” The test used Rekognition to compare images of American Congress members against a database of mugshots and resulted in 28 incorrect “matches,” which disproportionately “matched” people of color. This concerningly inaccurate technology was made available both to the government for surveillance purposes and to the public. 

Research shows that facial recognition algorithms are most inaccurate for darker-skinned people, especially darker-skinned women. Why does this happen? Arguably the biggest cause is the datasets these algorithms are trained on: they consist primarily of lighter-skinned men, so those are the faces the algorithms learn to recognize.

Now, what questions could the discipline of HCI raise to avoid or amend a case such as Rekognition’s? To start, who are we building for? If the answer is all faces, then we should be testing our algorithm on all faces. We could take this a step further as we think empathetically: who are the groups this kind of technology has neglected in the past, and how can we take care to do better? Who are our extreme users – those at either end of our target population? If or when we find that the technology isn’t working for some, how do we better understand the failure so we can iterate? What conversations and perspectives can we gather to achieve this? Why are we building this in the first place, and has that changed as we’ve learned more? These questions are but the tip of the iceberg – they concern not only how the user interacts with the technology, but also who we imagine as “the user” and why the technology is being created in the first place.
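One concrete practice these questions point toward is disaggregated evaluation: instead of reporting a single overall accuracy, measure accuracy per user group so that disparities like Rekognition’s become visible early. A minimal sketch, with group labels and data invented purely for illustration:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Disaggregate match accuracy by group.

    records: iterable of (group, predicted, actual) tuples.
    Returns {group: accuracy}, exposing disparities that a single
    overall accuracy figure would hide.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += predicted == actual
    return {group: correct[group] / total[group] for group in total}

# Invented data: overall accuracy is 75%, which sounds acceptable --
# until we see that all of the errors fall on one group.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
print(accuracy_by_group(records))  # {'group_a': 1.0, 'group_b': 0.5}
```

In a real audit, the groups, ground truth, and matching thresholds would come from careful, consented data collection rather than invented tuples – but the habit of breaking results down by group is the HCI lesson here.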

Criticisms of HCI

As with any evolving discipline, there are several criticisms of the HCI approach. For one, its nonlinear nature can leave room for ambiguity and a lack of structure in the later design phases, leaving teams unsure of what the next step should be. Of course, this flexibility was intended to give designers autonomy and to encode adaptability into the framework, but it can still be difficult for design thinkers to know the next right thing to do.

Others argue that the field of HCI lacks something called interaction criticism: evidence-based analysis that unpacks the relationships between an interface and the meanings or reactions it evokes in users. This prevents those using the traditional HCI approach from generating a domain of innovative design insights that interaction criticism would make possible. Notably, interaction criticism is by no means incompatible with human-computer interaction, and the authors of the research article themselves believe it can and should be integrated into the field.

A more recent criticism is that the term “critical design” is causing a divergence between the Design and HCI sectors. Ultimately, though, the authors of this article propose ways to reconcile the two disciplines and improve the connections between them. They argue we shouldn’t force criticality, and that we can continue to shape the definition of design criticism in a way that aligns with our evolving practices and principles. Again, these positions are not incompatible, and they only further highlight the importance of putting human beings first as we build more and more in the technological realm.

How does HCI meet AI?

Already, tech experts recognize the relationship between these two disciplines and see the incredible promise that lies at their intersection. Discussion among experts in the field raises several curious points, including the wide range of applications in which this intersection could transformatively impact the end user – education, healthcare, art, and so much more. We can already observe groundbreaking advancements such as natural language processing enabling speech-to-text, a superb example of how HCI meets AI: the AI powers the language processing itself, while the HCI ensures that the end user has an intuitive, clearly comprehensible experience.

When the two disciplines work together, they can achieve groundbreaking things neither could alone. AI company Future Web AI discusses this in an article about how AI impacts HCI, outlining several applications that have already made huge strides and predicting the road ahead. Two major applications are voice recognition and gesture recognition, as well as the intersection of the two. The author describes a hopeful and innovative future in which the technology we build truly understands us. And at the other end, we must ensure that we fully understand the technology we build, for a symbiotic relationship.

However, it may not always be so easy to find harmony between the two disciplines; there might even appear to be an ideological incompatibility between them. Design thinking, in particular, always circles back to the user in order to iterate: every step returns to a human experience for input. How, then, can we integrate and emphasize the human elements within an AI engine?

A new sub-field emerging in response to this question is human-centered artificial intelligence (HAI), a leader in the field being the Stanford Institute for Human-Centered Artificial Intelligence. In their 2022 report, the Institute’s Co-Directors call the center an “academic startup” as it explores two rapidly growing disciplines. Even in the 2021–2022 season, the Co-Directors declared that the progress had convinced even the most critical skeptics of the importance and position of AI. The leaders of this leading institute close their remarks by reaffirming their commitment to a “thoroughly human-centered perspective,” noting that AI can improve the human condition only if humans successfully guide its direction. Indeed, it is and will continue to be up to humankind to keep AI harmonious with human intelligence.

Questions for the future

As the relationship between HCI and AI continues to evolve, envisioning their future frontiers becomes both exciting and challenging. The intersection of these two domains holds promise of groundbreaking advancements, but it also raises important questions about the role of humans in an increasingly automated world.

Will AI always remain a co-pilot, not a pilot?

Will smart machines always require a human partner to keep them in check? The HCI angle would argue that, even if the human side is not involved in the execution, it should certainly remain involved in AI regulation.

How can we quantify those “just human” intuitions?

Especially in the testing stage of the design thinking process, we rely on that “natural human intuition” to determine how to make something more usable and user-friendly. With this in mind, how might we quantify these measures of usability and success in a way that AI might be able to better understand them?
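One long-standing answer from usability research is the System Usability Scale (SUS): ten alternating positively and negatively worded statements, each rated 1 to 5, scored with a fixed formula onto a 0–100 scale. A sketch of that standard scoring formula (the example responses below are invented):

```python
def sus_score(responses):
    """Score a System Usability Scale (SUS) questionnaire.

    responses: ten ratings from 1 (strongly disagree) to 5 (strongly agree),
    in questionnaire order. Per the standard SUS formula, odd-numbered
    (positively worded) items contribute (rating - 1) and even-numbered
    (negatively worded) items contribute (5 - rating); the sum is
    multiplied by 2.5 to land on a 0-100 scale.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten ratings between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Invented example: a fairly positive test session.
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # 87.5
```

Metrics like this give an AI system something concrete to optimize against – though they complement, rather than replace, the qualitative intuition gathered in human testing.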

As we consider these exciting questions for the future, we must still acknowledge that artificial intelligence and human intelligence are distinct; one is not a replacement for the other. However, I reckon that the two will make incredible partners in this ever-evolving industry.

Partnership between artificial and human intelligence

Perhaps we move into an equally collaborative coexistence, or a more autonomous era in which AI operates independently and even outperforms human capabilities in a variety of tasks. Already, we have seen AI performing in incredible ways in certain areas of medical diagnostics, such as radiology.

The realm of possibilities and the unknown only further highlights the need for responsible AI development. As AI becomes an integral part of our lives, guardrails, ethical frameworks, and continuous monitoring mechanisms will be essential. We envision, and have already seen, how Daiki can serve as all three – a guardrail, a framework, and a monitor – as it becomes more and more common for companies to build using AI.

Indeed, Daiki is designed to alleviate concerns and guide the trajectory of responsible AI development – a mediator between human values and technological advancements. It provides transparency, accountability, and adaptability, ensuring that AI aligns with societal norms and ethical standards.

Looking ahead: quantum artificial intelligence

Artificial intelligence itself is ever-changing: from applied to generative to multimodal and interactive AI to quantum artificial intelligence (QAI). As these fields evolve, so will the relationship between AI and HCI – one example being the quantum-classical interface that allows a quantum computer to communicate with a familiar, classical laptop. As computing paradigms rapidly evolve, so must our consideration of the human angle.

Human-Computer Interaction is not merely the title of a field; it is a crucial lens that informs the technology we build, the way we build it, and whom to consider in the process. When we discuss responsible AI, then, we can see the crucial role of HCI in the development and adoption of AI.
