8 April 2021

The third Halland Tech Meet took place on 8 April 2021, hosted by HighFive. Just as at previous events, the sessions sparked inspiring conversations, and new lessons were taken away from every part of the programme. What does the concept of gamification actually mean, and how can we use it to make our everyday lives more efficient? Which challenges in our companies, and in society at large, can we tackle with the help of AI, and how will food production change if we bring in knowledge from agtech and foodtech? The answers to these questions, and many more, can be found in the clips below.

26 November 2020

On 26 November 2020, Halland Tech Meet was held for the second time that year. Last time we talked a lot about human-centered AI; this time we took a deep dive into our connected world, smart cities, and how technology and AI can act as an enabling force for better, more personalized health. The conversations featured a group of interesting people who inspired all of us in one way or another. Did you miss watching it live? Catch up on the conversations below!

13 May 2020

Human-centered AI

On May 13, 2020, Halland Tech Meet was held as a digital event bringing together business leaders and organizations in Halland that have found new ways to act in the current situation. More than 200 people registered for the event. Participants in the sessions included r360's chairman Ludvig Granberg; Lisa Gramnäs from Gramtec Development; Pontus Wärnestål, senior lecturer at the Academy of Information Technology at Halmstad University; and Malin Kjällström from Connect West / Gothenburg Tech Week.

Here you can also find the conversation in podcast format

Questions and answers from Halland Tech Meet, 13 May (Human-centered AI)

What kinds of jobs are being replaced by robots?

The short (but simplified) answer is that jobs that are monotonous and repetitive can be replaced. But that is not entirely true, as many such jobs contain steps that, on closer reflection, cannot be performed by robots. For example, it has been found that detail picking in a warehouse (small irregular shapes and packages) is performed faster and more safely by people, while larger warehouse jobs (such as identifying and optimizing the loading and unloading of whole pallets) are performed better by autonomous trucks. So a slightly more elaborate answer is that it depends, and that a human-centered approach is about performing data analysis and figuring out how AI and humans can strengthen each other and work together. Also, be careful about whether you mean "AI" in the more general sense of something that can help with cognitive work, or really just robots (which are physical and help with physical work).

It is also interesting to question conventional assumptions about what is suitable for AI and robots compared to humans. A common belief, for example, is that creative tasks (such as artistic work) and empathetic tasks (such as those in health and social care) are best suited to humans. But it has been shown that AI can create both music and visual art that is rated very highly by art connoisseurs, as well as produce therapeutic effects in patients who have interacted with interactive AI assistants... As mentioned, the key is to see AI as a way to complement and augment, rather than to automate and replace. (As early as 1997, generated "Bach music" fooled music connoisseurs...)

To give a few more examples, there are forecasts suggesting that photo models and news and weather anchors are professions that could be completely replaced by AI.

For more information:
Generated AI models: 
Soul Machines' hyperrealistic assistants:
AI influencers on Instagram:

What new roles, people, or consultants does an organization or company need in the future to benefit from AI and to work with data-driven processes and decisions more effectively?

It is hard to say exactly what roles an organization will need going forward; in terms of expertise, it may be easier to consider the following:

  • Someone who knows the basics of data analysis / machine learning. For example, you can hire or train your existing staff using online courses.
  • Someone who knows about the company's processes and can decide / prioritize which processes can benefit from data-driven solutions, as well as what improvements would "pay off".
  • Someone who understands your customers and services and can help design data collection methods as well as data-driven services according to their needs, such as a UX designer.
  • All of these skills must work together as interactively as possible.

My suggestion is that once you have a goal for what you want to achieve in the organization, it is easier to determine what expertise is needed. Also keep in mind that a master's student in data mining could produce certain models as part of their thesis work, provided there is a clear goal and data available.

What about so-called generative design, where a human sets a framework and the AI generates a large number of candidate solutions within that framework? Specialists are needed to select and evaluate, but the bulk of the work is done by the AI.

This was partly answered during the conversation, but in short it is linked to the fact that a human-centered approach involves modeling so-called "hybrid" work, where we see AI and humans as part of the same workflow. Instead of the AI automating and replacing, it complements and reinforces human tasks. Examples of generative design are found in architecture, where a model is trained on thousands of floor plans and sketches. Then, given certain parameters that the architect sets, it can generate a variety of suggestions that the architect can then use their expertise to curate and modify. An example of how this works is here:
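The generate-and-curate loop described above can be sketched in a few lines of Python. This is a toy illustration, not a real architecture tool: the "split a floor area into rooms" framing, the constraint, and the scoring function are all invented for the example. The point is the division of labor: the human sets the framework (constraints and preferences), the machine enumerates many candidates, and a specialist reviews only the shortlist.

```python
# Toy generative-design loop: the human sets constraints, the machine
# generates many candidates, and a specialist curates the top-ranked ones.
# All parameters and the scoring heuristic are invented for illustration.
import random

random.seed(0)  # deterministic for the example

def generate_candidates(total_area: int, n_rooms: int, n: int = 200):
    """Randomly split total_area into n_rooms positive room sizes."""
    candidates = []
    for _ in range(n):
        cuts = sorted(random.sample(range(1, total_area), n_rooms - 1))
        sizes = [b - a for a, b in zip([0] + cuts, cuts + [total_area])]
        candidates.append(sizes)
    return candidates

def score(sizes, min_room: int = 10) -> float:
    """Human-set framework: every room at least min_room m2; prefer balance."""
    if min(sizes) < min_room:
        return float("-inf")          # violates the framework -> discarded
    return min(sizes) - max(sizes)    # reward evenly sized rooms

candidates = generate_candidates(total_area=120, n_rooms=4)
shortlist = sorted(candidates, key=score, reverse=True)[:5]
for sizes in shortlist:               # the specialist reviews this shortlist
    print(sizes, score(sizes))
```

The machine does the bulk of the search (here 200 candidates); the human only evaluates the five best, which is exactly the split the question describes.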

A chatbot: how much AI is behind such a solution?

It depends on many factors. Some chatbots are completely scripted and do not contain much AI at all. Sometimes they use ready-made language packages that help interpret the entered text. This interpretation can range from simple keyword identification to full grammatical parsing, or to advanced language models trained for specific domains (e.g. healthcare). If the interaction is spoken (as with Siri or Alexa), then the speech recognition itself is based on AI models. What happens after the audio stream has been converted into a text string is called dialogue modeling, and it too can involve varying degrees of AI: from simple rule-driven dialogue trees to flexible plan-based dialogue models that can handle "jumps" in the conversation and resume sub-dialogues. The latter, however, is very rare in commercial applications. (Pontus's dissertation is precisely about mixed initiative between users and AI agents, and examines how far personalization of such dialogue models can go and how it is experienced.)

So, just because the interaction model is "chat" and uses natural-language statements does not mean there is much AI involved in any real sense.
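The "completely scripted" end of the spectrum described above can be made concrete with a minimal keyword-matching chatbot. This sketch contains no machine learning at all, just hand-written rules; the keywords and replies are invented for the example:

```python
# A minimal scripted chatbot: simple keyword identification, no AI.
# Each rule is a set of trigger keywords paired with a canned reply.
RULES = [
    ({"hours", "open"}, "We are open 9-17 on weekdays."),
    ({"price", "cost"}, "Our basic plan costs 99 kr per month."),
    ({"hello", "hi"}, "Hello! How can I help you?"),
]
FALLBACK = "Sorry, I did not understand that."

def reply(message: str) -> str:
    # normalize: lowercase, strip question marks, split into words
    words = set(message.lower().replace("?", "").split())
    for keywords, answer in RULES:
        if words & keywords:   # any trigger keyword present fires the rule
            return answer
    return FALLBACK

print(reply("Hi there"))            # matches the greeting rule
print(reply("What is the price?"))  # matches the price rule
```

Everything beyond this — grammatical parsing, domain-trained language models, plan-based dialogue management — adds actual AI on top of (or in place of) such rules.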

What ethical dilemmas do you see with human-centered AI?

  • How to handle sensitive information. AI requires data, in a digital format, to be stored and transferred. The data is often stored in the cloud for ease of access, which creates vulnerability, especially when the data is sensitive and valuable. Humans might not be the best at keeping secrets either. Just as we have procedures in place to limit how people share information, we have to put procedures in place that limit how machines share information.

  • What to do about predictions. Consider the movie “Minority Report” where the police started arresting people that were “going to” commit a crime because of a prediction by an oracle. AI prediction systems are in effect an oracle that tries to determine what will likely happen with a certain level of confidence. When the events they are trying to predict are of a sensitive nature then the effects of either being wrong or being right must be considered and contingency plans must be put in place.

  • Bias and inclusion. Bias is a source of error. Humans are extremely biased in their decision making process as well. If you are interested in this topic I recommend the book “Thinking, Fast and Slow” by Daniel Kahneman. Machines are also biased by the data that they are trained on. If there are not enough examples covering the entire distribution of inputs, say not enough women represented in a sample, then the AI will not be good at modeling those missing elements. No model is without bias, but by being aware of the model’s limitations, we can work towards an acceptable compromise.

  • How to handle errors. Both humans and machines can be wrong in their decisions. In the case of a medical error, for example, there is insurance. If a doctor makes a wrong diagnosis, they may be sued for malpractice. What happens if an AI agent makes a wrong diagnosis? Consider also that blood tests and other medical tests are not 100% accurate, so there is always a risk. The way we deal with this situation today is to assign someone to be ultimately responsible for the decision: the doctor, the lab technician, or the person supervising the AI, for example. However, the law needs to be extended to cover use cases where the "supervisor" is not clearly defined.
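The bias-and-inclusion point above has a simple numeric core: a model that fits the majority of its training data can look accurate overall while failing completely on an underrepresented group. The following toy example, with entirely invented numbers, makes that visible:

```python
# Toy illustration of sampling bias: a degenerate "model" that always
# predicts the most common training label looks accurate overall, but
# fails completely on the underrepresented group. All numbers are invented.
from collections import Counter

# training data: (group, label) pairs; group "B" is badly underrepresented
train = [("A", 0)] * 90 + [("B", 1)] * 10

# degenerate model: always predict the majority label seen in training
majority = Counter(label for _, label in train).most_common(1)[0][0]

# evaluate on a test set with the same group skew as the training data
test = [("A", 0)] * 90 + [("B", 1)] * 10
overall = sum(majority == label for _, label in test) / len(test)
group_b = sum(majority == label for g, label in test if g == "B") / 10

print(f"overall accuracy: {overall:.0%}")     # 90% -- looks fine
print(f"accuracy on group B: {group_b:.0%}")  # 0% -- hidden by the average
```

Real models are far less crude than this, but the mechanism is the same: an aggregate accuracy figure can hide complete failure on the part of the input distribution that the training data did not cover, which is why awareness of a model's limitations matters.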

Educational Links:

Elements of AI (online reading course, LiU):

AI for Everyone (online video course, Coursera):

Human-Centered Machine Learning (podcast, HH):

Machine Learning Crash Course (online video course, Google):

Halmstad University:

inUse course Human-Centered AI:

Halland Tech Meet – Session 2
Halland Tech Meet – Session 3


What is AI.m? Here you can read more about Halmstad University's and HighFive's new project, where human-centered AI is the focus.


To make this project possible we have formed a network of exciting individuals with different competences and backgrounds.


Here we present interesting articles, analyses, and educational material regarding artificial intelligence and service innovation.


A number of companies have already gone through our process and here you can read what they have to say about what it was like and what they got out of it.