Halland Tech Meet –
Human-centered AI


On May 13, 2020, Halland Tech Meet was held, a digital event bringing together business leaders and organizations in Halland that have found new ways to act in the current situation. More than 200 people registered for the event. Participants in the sessions included r360's chairman Ludvig Granberg; Lisa Gramnäs from Gramtec Development; Pontus Wärnestål, senior lecturer at the Academy of Information Technology at Halmstad University; and Malin Kjällström from Connect West / Gothenburg Tech Week.

This page has been updated with the video recording of the session, along with your questions and input regarding Session 1 – human-centered artificial intelligence.

Questions from the audience at the Halland Tech Meet on Human-centered AI

What kinds of jobs are being replaced by robots?

The short (but simplified) answer is that jobs that are monotonous and repetitive can be replaced. But that is not entirely true, as many such jobs contain steps that, on closer reflection, cannot be performed by robots. For example, it has been found that detail picking in a warehouse (small irregular shapes and packages) is performed faster and more safely by people, while larger warehouse jobs (such as identifying and optimizing the loading and unloading of whole pallets) are performed better by autonomous trucks. So a slightly more elaborate answer is that it depends, and that a human-centered approach is about analyzing the work and figuring out how AI and humans can strengthen each other and work together. You also have to be careful about whether you mean "AI" in the general sense of systems that help with cognitive work, or literally robots (which are physical and help with physical work).

It is also interesting to question conventional perceptions of what is appropriate for AI/robots compared to humans. For example, a common perception is that creative tasks (e.g. artistic work) and empathic tasks (e.g. in health and social care) are best suited to people. But it has been shown that AI can create both music and visual art that are highly valued by art connoisseurs, and that interactive AI assistants can have therapeutic effects on the patients who use them. The key, as stated, is that we should see AI as a way to complement and reinforce, rather than automate and replace. https://www.nytimes.com/1997/11/11/science/undiscovered-bach-no-a-computer-wrote-it.html (Already in 1997, generated "Bach music" fooled music lovers all over the world.)

To give a few more examples, there are forecasts suggesting that photo models and news and weather anchors are professions that could be completely replaced by AI.

For more information:
Generated AI models: https://youtu.be/8siezzLXbNo
News anchors: https://youtu.be/Jm68C12QXV4
Soul Machines' hyperrealistic assistants: https://youtu.be/t_hQY7fMtpU
AI influencer on Instagram: https://www.instagram.com/lilmiquela/?hl=en


A chatbot – how much AI is behind such a solution?

It depends on many factors. Some chatbots are completely scripted and do not contain much AI. Sometimes they use ready-made language packages that help interpret the entered text. This interpretation can range from simple keyword identification to full grammatical parsing or advanced language models trained for specific domains (e.g. healthcare). If the interaction is spoken (as with Siri or Alexa), the speech recognition itself is based on AI models. What happens once the audio stream has been converted to a text string is called dialogue modeling, and it too can involve varying degrees of AI: from simple rule-based dialogue trees to flexible plan-based dialogue models that can handle "jumps" in the conversation and resume sub-dialogues. The latter, however, are very rare in commercial applications. (Pontus's dissertation, for example, deals with mixed initiative between users and AI agents and examines how such dialogue models can be personalized and how that personalization is experienced.)

So, just because the interaction model is "chat" and uses natural-language input does not mean that there is much AI behind it in any real sense.
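To make the "completely scripted" end of the spectrum concrete, here is a minimal sketch of a keyword-matching chatbot. All keywords and replies are invented for illustration; real products typically layer a language package or trained model on top of (or instead of) this kind of lookup.

```python
import re

# Each rule maps a set of trigger keywords to a canned reply.
# These keywords and replies are made up for the example.
RULES = {
    ("price", "cost"): "Our pricing starts at 99 SEK per month.",
    ("hours", "open"): "We are open weekdays 9-17.",
    ("human", "agent"): "Connecting you to a human agent...",
}

FALLBACK = "Sorry, I did not understand. Can you rephrase?"

def reply(message: str) -> str:
    """Return the first canned reply whose keywords appear in the message."""
    words = re.findall(r"[a-z]+", message.lower())
    for keywords, answer in RULES.items():
        if any(kw in words for kw in keywords):
            return answer
    return FALLBACK
```

There is no learning, parsing, or dialogue state here at all – yet a bot like this can still feel "conversational" to a user, which is exactly why the chat interaction model says little about how much AI is inside.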


What new roles / people / consultants does an organization / company need in the future to benefit from AI and work with data-driven processes / decisions in a more effective way?

It is hard to say exactly which roles will be needed in an organization going forward; when it comes to expertise, it may be easier to consider the following:

  • Someone who knows the basics of data analysis / machine learning. For example, you can hire someone or train your existing staff using online courses.
  • Someone who knows about the company's processes and can decide / prioritize which processes can benefit from data-driven solutions, as well as what improvements would "pay off".
  • Someone who understands your customers and services and can help design data collection methods as well as data-driven services according to their needs, such as a UX designer.
  • All of these competencies must work together as closely as possible.

My suggestion is that once you have a goal for what you want to achieve in the organization, it becomes easier to determine what expertise is needed. Also keep in mind that a master's student in data mining could produce certain models as part of their thesis work, provided there is a clear goal and data.


What about so-called generative design, where you set a framework and AI provides a number of suggested solutions within that framework? Specialists are needed to select and evaluate, but the bulk of the work is done by AI.

This was partly answered during the conversation, but in short it is linked to the fact that a human-centered approach involves modeling so-called "hybrid" work, where we see AI and humans as part of the same workflow. Instead of automating and replacing, the AI complements and reinforces human tasks. Examples of generative design are found in architecture, where a model is trained on thousands of floor plans and sketches. Given certain parameters that the architect sets, it can then generate a variety of suggestions, which the architect can curate and modify using his or her expertise. An example of how this works is here: https://vimeo.com/showcase/4991875/video/323475301
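The generate-and-curate loop described above can be sketched in a few lines: the machine proposes many candidates within the designer's constraints, and the human expert then reviews the shortlist. The parameters and ranges below are invented for illustration and are not taken from any real generative-design tool.

```python
import random

def generate_candidates(n, min_area, max_area, seed=0):
    """Propose n floor-plan candidates (width, depth) whose area
    falls within the range the designer has set."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    candidates = []
    while len(candidates) < n:
        width = rng.uniform(5, 30)   # meters, invented bounds
        depth = rng.uniform(5, 30)
        if min_area <= width * depth <= max_area:
            candidates.append((width, depth))
    return candidates

# The machine generates many options within the framework...
proposals = generate_candidates(n=10, min_area=120, max_area=200)
# ...and the architect curates and modifies the best of them.
```

The division of labor is the point: the AI explores the solution space quickly, while the specialist's judgment decides what actually gets built.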


What ethical dilemmas do you see with human-centered AI?

  • How to handle sensitive information. AI requires data, in a digital format, to be stored and transferred. The data is often stored in the cloud for ease of access, which creates vulnerability, especially when the data is sensitive and valuable. Humans might not be the best at keeping secrets either. Just as we have procedures in place to limit how people share information, we have to put procedures in place that limit how machines share information.
  • What to do about predictions. Consider the movie “Minority Report” where the police started arresting people that were “going to” commit a crime because of a prediction by an oracle. AI prediction systems are in effect an oracle that tries to determine what will likely happen with a certain level of confidence. When the events they are trying to predict are of a sensitive nature then the effects of either being wrong or being right must be considered and contingency plans must be put in place.
  • Bias and inclusion. Bias is a source of error. Humans are extremely biased in their decision making process as well. If you are interested in this topic I recommend the book “Thinking, Fast and Slow” by Daniel Kahneman. Machines are also biased by the data that they are trained on. If there are not enough examples covering the entire distribution of inputs, say not enough women represented in a sample, then the AI will not be good at modeling those missing elements. No model is without bias, but by being aware of the model’s limitations, we can work towards an acceptable compromise.
  • How to handle errors. Both humans and machines can be wrong in their decisions. In the case of a medical error, for example, there is insurance. If a doctor makes the wrong diagnosis, they may be sued for malpractice. What happens if an AI agent makes the wrong diagnosis? You may also consider that blood tests and other medical tests are not 100% accurate and there is always a risk. The way we deal with this situation today is to assign someone to be ultimately responsible for the decision – the doctor, the lab technician, or the person supervising the AI, for example. However, the law needs to be extended to cover use cases where the "supervisor" is not clearly defined.
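The bias point above has a very practical first step: before training anything, simply count how the groups in your data are represented. Here is a minimal sketch of such a check; the data and the "gender" attribute are made up for the example, and a real audit would of course go much further than raw counts.

```python
from collections import Counter

def representation(samples, key):
    """Return each group's share of the sample for a given attribute."""
    counts = Counter(s[key] for s in samples)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Invented toy training set, skewed on purpose.
training_data = [
    {"gender": "male", "outcome": 1},
    {"gender": "male", "outcome": 0},
    {"gender": "male", "outcome": 1},
    {"gender": "female", "outcome": 0},
]

shares = representation(training_data, "gender")
# Women make up only a quarter of this sample, so a model trained on it
# will likely perform worse for that group.
```

A check like this does not remove bias, but it makes the model's limitations visible, which is the precondition for the "acceptable compromise" mentioned above.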

Educational Links:

Elements of AI (online reading course, LiU): https://www.elementsofai.com/

AI for Everyone (online video course, Coursera): https://www.coursera.org/learn/ai-for-everyone

Human-Centered Machine Learning (podcast, HH): http://dap.hh.se

Machine Learning Crash Course (online video course, Google): https://developers.google.com/machine-learning/crash-course/ml-intro

Halmstad University: https://www.hh.se/samverkan/kompetensutveckling/utbildning-inom-informationsteknologi-for-yrkesverksamma.html

inUse course Human-Centered AI: https://www.inuse.se/academy/human-centered-ai/

