As a student pursuing a career in User Experience/User Interface Design or Product Design, it is important to understand and embrace the ever-changing technology of Artificial Intelligence. What is AI, and how can it be used in UX Design? AI is, in my opinion, a blanket term for a large network of technologies. At its core, it is a process in which computers use and compile data to simulate human intelligence. The information a system gathers can be used to learn, reason, or even perform decision-making tasks. AI can be integrated into UX Design to help digital products learn from their users and personalize each user's experience with a program or application. The ability to collect data and analyze it at remarkable speed allows AI to predict user interactions, automate repetitive tasks, and improve user engagement. At this point, integrating some form of Artificial Intelligence into a new app or program is almost essential if you want your product to be competitive and successful. Although the integration of AI in User Experience seems like a no-brainer, there are problematic aspects that need to be considered as the technology reaches more and more corners of our lives. It is crucial to understand and address these problems if an ethical UX design is to be achieved. Some of the leading problematic aspects of AI technology include bias and discrimination, privacy and security concerns, accessibility, and lack of transparency.
Two different types of bias can occur and lead to discrimination. Algorithmic bias occurs when a system's algorithm prioritizes or creates a preference for certain groups. Training data bias occurs when the data sets a model learns from reflect societal bias. A commonly cited example is facial recognition software that is less accurate at identifying individuals with darker skin tones. Beyond bias that can lead to discrimination, another concern with AI is privacy and security. On the privacy side, there are concerns about how AI collects, analyzes, and stores the personal data of the users a given model interacts with. Without proper guardrails, this could lead to unauthorized use or sharing of personal information. Security is also an issue, because hackers or even government agents could alter or inject data in ways that corrupt an AI model's decision-making or behavior. An entirely different problematic aspect of emerging AI systems is accessibility. Two examples I came across are socioeconomic disparities, in which groups with less access to AI fall further behind in technological and societal advancement, and chatbots that lack the data sets to work in less widely spoken languages. Finally, there is the concern over transparency, or the lack thereof. In some of our readings and videos this was referred to as the "Black Box Problem": the complexity of AI decision-making can be so challenging to interpret, for both users and developers, that there is a level of opaqueness to how the model reaches its answers or actions. This is especially problematic in applications such as law enforcement or healthcare.
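The facial-recognition example above can be made concrete with a simple fairness audit: given a model's predictions on labeled test data, compare accuracy across demographic groups and flag large gaps. This is only an illustrative sketch; the group names and evaluation records below are invented for demonstration, not real results.

```python
# Minimal per-group accuracy audit: a first step toward surfacing
# training-data bias. Group labels and records are hypothetical.
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation results for two demographic groups
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]
rates = accuracy_by_group(records)
# A large accuracy gap between groups signals a disparity worth investigating
gap = max(rates.values()) - min(rates.values())
```

A real audit would use far larger samples and more careful group definitions, but even this simple disaggregation makes a disparity visible that a single overall accuracy number would hide.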
In addition, the lack of transparency can make it difficult to determine whether responsibility falls on a developer, a data set, or an algorithm when something goes wrong.
A major challenge of User Experience Design is to balance the drive for innovation with responsible, ethical AI design. I think one of the most important descriptors for achieving this balance is thoughtful design. A designer must be aware of the problematic aspects of AI mentioned above, and then design from a human-centered standpoint that integrates ethical design principles. Designing from a human-centered standpoint means empathizing with users by considering their viewpoints, experiences, preferences, and concerns. This can be achieved through well-rounded and thorough user research. In addition, human-centered design should address real human problems or pain points through thoughtful innovation or creative solutions. The ethical component of responsible design can include establishing processes that identify and eliminate biases. Ethical design should also include clear explanations or disclaimers of how AI is being utilized. This kind of transparency can reassure users and build trust over time.
As Artificial Intelligence continues to evolve and integrate into User Experience Design, there are multiple considerations for best practice. I have mentioned it several times already, but I believe transparency to be one of the most important considerations for the future of AI development. AI shouldn't be a grey nebula operating in the shadows, but rather a tool that its users can understand and explain. Future interfaces should be transparent and communicate to their users how the AI system works, how users can influence outcomes, and why certain decisions are generated. In addition to transparency, the other best-practice consideration should be an emphasis on addressing biases and designing for diversity. This ties back to transparency but would require a clearer understanding of the data sets being provided to a system. It would also include regulations around data audits and testing to eliminate bias in either the data sets or the algorithms.
