The ‘Spitzenprofessurenprogramm’ (SPP) ...
is part of the Hightech Agenda Bayern and is funded by the Bavarian State Ministry of Science and the Arts. A professorship under this program at a state university of applied sciences is endowed for a period of five years with up to 3.5 million euros in addition to the staff position provided by the university - ideal conditions for experienced scientists from Germany and abroad. A total of around 43 million euros, approved for the years 2021 to 2023, has been earmarked for Spitzenprofessuren (i.e. professorships for outstanding, renowned scientists) across all three types of university.

What does your researcher's heart beat for?
I work on the topic of trust & safety with a focus on social media. My interest in this area has grown over time. I got involved with social media analytics early on - the topic is now almost standard, but it was different when I was doing my doctorate. In 2012, you still had to explain what the added value of social media was and why it was important for companies to take an interest in it in the first place. Back then, typical feedback from board members in the insurance or financial sector, for example, was: “I'm not selling a pension plan with 144 characters on Twitter. What do I care about social media?” It was very exciting to follow the process in the years that followed and to observe how social media gradually established itself and its added value came to be recognized. Since then, another shift has taken place: awareness has grown of how social media, and what happens there, can also threaten society, or at least cause problems for many individuals. This development particularly fascinated me, as I was already deeply involved in the field.
What does the Spitzenprofessur mean to you?
I hope that the Spitzenprofessur, with the help of the generous funding, will finally allow me to advance all the topics that fascinate me and are relevant to society. Even though I have always worked closely with other researchers, as an individual you only have limited capacity, so many questions remain unanswered. For example, there are numerous areas in which our scientific research has not yet been able to incorporate existing knowledge from the business world. To close these gaps, we need more collaborators and greater freedom.
My goal is to take this step successfully in Germany - thanks to these resources, the support of the Hightech Agenda and the ideal academic environment at HNU. I see great potential at HNU to advance these topics. In my view, Germany offers an outstanding education system and trains excellent doctoral students - an environment in which you can really make a difference.
What were the main stages of your academic career?
The decisive starting point was a very good degree in psychology in Osnabrück, which gave me a sound scientific education and awakened my interest in research. I still benefit today from that solid foundation and the tools I was given back then.
I came to an important realization while working in consulting at Next Practice in Bremen, when we carried out our first social media analyses in 2010 - at a time when I didn't even have a Facebook or other social media account myself. For the first time, I realized the potential of social media and how profoundly it could change our society. That was the starting point for my next steps: my path led me to Frankfurt to do a doctorate in business informatics - a big step for someone with a background in psychology and by no means a matter of course. As far as I know, no psychologist in Germany had done a doctorate in business informatics before me. I still vividly remember the evening before my interview: I was staying with my cousin in a student hall of residence in Frankfurt and sat there thinking: “What am I actually doing here? Now I'm coming to the E-Finance Lab and I'm planning to do a doctorate in this field - without any previous knowledge of business informatics or finance; after all, I'm a psychologist!”
Despite these doubts, I went, and since the topic was social media analytics, I at least had a point of reference from my previous consulting work. Today, I am very grateful to my former professors for putting their trust in me back then. My doctorate then focused on social media analysis from the perspective of business informatics and its benefits for companies. I received an excellent welcome at the E-Finance Lab and was lucky enough to be in a great working environment with patient colleagues who familiarized me intensively with the aspects of business informatics.
From then on, everything developed quickly: My doctorate was followed by a short period as a postdoc in Mannheim, which motivated me above all to develop myself further and publish more in order to gain greater academic freedom in the long term. I then spent two years as an assistant professor at Clemson University in South Carolina, at the time one of the leading institutions in the field of information systems. There I realized that research is not only about academic freedom, but also about results. It is not enough just to concentrate on what you personally find exciting - in the end, concrete results must be achieved, i.e. what counts is the hard currency of publications.
After two years in the USA, I moved to Brisbane in Australia, where I was able to experience academic work from a different perspective. I quickly realized that the scientific systems in Germany, the USA and Australia are sometimes very different. In Australia, I particularly liked the holistic approach: it wasn't just about top publications - as important as these are - but also about establishing industry contacts, attracting research funding and training the next generation of scientists. This way of promoting the 'organic growth' of a researcher stood in contrast to the stricter focus on results in the USA and really appealed to me. I then successfully conducted research in Brisbane for five years.
Now I'm looking forward to the next steps and seeing how I can incorporate all this experience into the greater freedom that academic work in Germany offers.

What do you expect from your new start at HNU?
I am a person who appreciates routines and doesn't need daily adventures - regular work rhythms mean a lot to me. Over the last few years, I've traveled a lot, so my everyday working life has been anything but routine. Although this time was very enriching, I am now all the more pleased at the prospect of a new phase with more structure and reliability. Another point: I am East Frisian through and through and could never have imagined moving to Bavaria in the past. Now I'm looking forward to getting to know a whole new part of Germany, exploring the beautiful surroundings and going hiking in the Alps.
And of course I'm looking forward to working with my colleagues at HNU!
What is important to you in teaching?
I am particularly pleased that lectures and seminars at HNU take place in small groups - no events with over 1,000 students that simply have to be 'processed'. This allows for intensive exchange and critical discussion instead of mere large-scale knowledge transfer. It offers the opportunity to make students aware of key issues and enable them to form differentiated, reflective opinions instead of being polarized or swayed by simple answers. I consider this a valuable, overarching skill - a kind of 'meta-skill' that helps you go through life more successfully. Anyone who first asks “Why is this the case?” and then takes a critical look at new topics has already gained a lot.
So that's my overarching goal: to enable students and doctoral candidates to critically question and develop a genuine interest through direct exchange, instead of making hasty judgments and expressing opinions without knowing the background.
What makes HNU so attractive to you?
I perceive the working environment at HNU as very informal on the one hand and refreshingly solution-oriented on the other. Elsewhere, people are perhaps quite happy to point out problems and find reasons why certain things cannot be implemented. At HNU, by contrast, I get the impression that there is a great willingness to deviate from previous procedures, to rethink things and to make a difference. I like working in a small team - compared to large universities - being able to act more dynamically and taking advantage of the shorter decision-making paths. That's not to say that those involved shouldn't be heard or that feedback shouldn't be taken into account in decision-making processes; but it is just as important that things are actually implemented in the end and don't fail due to over-formalization. For me, HNU is the ideal place to 'risk' new things and find new ways, because I sense a fresher atmosphere and a greater will to get things done than elsewhere. I quickly noticed this spirit of optimism, this willingness to be flexible and to find solutions, while preparing for the new position - and I want to take that with me and build on it.
Finally, a glimpse into the future: the year is 2030 - what would you like to have achieved in your field of research by then?
Over the next few years, I would like to build something that goes beyond a single Spitzenprofessur on the subject of trust and safety. Ideally, HNU will be known and networked for its expertise in this field not only throughout Germany, but also internationally - for example through research collaborations. My goal would be to anchor the topic institutionally, so that HNU is perceived as an authoritative voice on trust and safety and is actively consulted on decisions in this area.
In a nutshell
When I'm not researching or teaching, I'm ... doing CrossFit, practicing bar muscle-ups or handstand walks, and throwing the ball for our border collie Onyx.
My current reading ... over the last few weeks has mainly consisted of apartment ads, health insurance flyers and visa regulations.
My specialty in three words: Innovative, relevant, up-and-coming.
My next publication ... will be entitled “Shadowbanning - An opaque form of content moderation”.

Deep Dive: more about Prof. Risius' research focus
Which topics would you like to focus on in the future?
One example is disinformation campaigns and content moderation in social media. So far, I have only been able to touch on these topics and point out that many aspects in this context remain unexamined. For example, we often only look at how to recognize disinformation or how people react when misinformation is flagged. However, we have long known that our perceptions and reactions are strongly shaped by existing beliefs - the keyword here is confirmation bias. This means that regardless of whether something is fake news or not, we tend to believe information that confirms our existing beliefs, while we devalue or ignore contrary content - even corrections and warnings. Ultimately, many people therefore specifically seek out content that matches their own point of view.
Even if we are aware of this confirmation bias, we still lack the knowledge of how to deal effectively with disinformation. Which interventions really help? This becomes particularly important when you consider that disinformation campaigns are often state-sponsored, as current examples show: for example, there are attempts from Russia to spread disinformation in Germany, and the US election campaign is also partly characterized by this. Such disinformation, which is used by malicious actors for their own purposes, deliberately manipulates social media content in order to circumvent common detection mechanisms.
My aim is to find alternative approaches for dealing with disinformation. New technologies, especially AI, open up exciting opportunities to better inform users and intervene in a more targeted manner.
How do we deal with this, how do we raise awareness of these threats? We hardly have any comprehensive intervention options. We are currently trying measures such as blocking or flagging content as false information. However, due to confirmation bias, people often simply look for this information on other platforms - or ignore the labels.
If we want to deal with these problems effectively, we first need to identify which target groups can still be influenced at all. Opinions are often already so entrenched that much of the energy is wasted on those who can't be convinced anyway or don't need to be convinced at all. We also often overestimate our ability to recognize disinformation: we believe it's always just the others who fall for fake news - never us. Ultimately, it's about identifying the people who can actually be reached and informed through targeted measures.
Does that mean that such intervention measures don't work for many people...?
There is no one-size-fits-all solution here either. One principle that I always keep in mind is that if our interventions manage to stop even one percent of people from sharing hate messages, we have already achieved a massive impact through economies of scale - because that means that a lot of harmful content does not even appear online.
Twitter Notes can be helpful for some users, but they don't have a broad impact. It is good that they exist, but more far-reaching approaches are needed to reach other groups. The question therefore arises: What interventions could address these groups and how can we best reach them?
Some platforms are now working on such further-reaching measures. A current example is Instagram, which has come under pressure due to negative reporting on sextortion [blackmail using intimate images or videos; editor's note]. After several young people took their own lives as a result, Instagram responded with measures such as special protection functions for youth accounts. These functions are designed to prevent young people from getting into dangerous situations, for example by restricting screenshots or the recording of stories that disappear after a certain time.
However, such measures often only come about due to public pressure from research or from organizations that repeatedly point out these risks. For years, there have been calls for platforms to take action to protect the lives of vulnerable users. Of course, there are also different views on youth accounts, and trade-offs to weigh - as with any measure. But I believe that if these functions spare even 200 young people from harm, then they are a sensible step. In this sense, research can make a decisive contribution to an important field of action.
So does the topic of trust and safety always have a political dimension?
Many of these issues have a political dimension: little can be achieved without political will and pressure. It is therefore important to influence political decisions. At the same time, we can also influence the platforms themselves, as many of them run their own research programs and have an interest in effective moderation. To do this, however, we need to speak their language - something I specifically pursue in my work. A key aim of our intervention campaigns as academics is to reduce engagement with harmful content - that has to be the goal. However, the platforms themselves naturally have no interest in reducing engagement overall. We therefore need to find approaches that reduce engagement with negative content without harming the platforms in other areas.
This includes working on algorithms and developing new incentive structures. It's not just about bans, blocking or deplatforming [the exclusion of people or groups of people from platforms; editor's note], but also about finding alternative measures that serve both the interests of the platforms and the public good. One example is Meta, which adjusted its algorithm in order to break the so-called 'addiction spiral' and thus act in a more socially responsible manner. However, this change created a gap that was immediately filled by TikTok - a platform that specifically encourages engagement through addictive algorithms. This in turn prompted Meta to introduce Reels to once again compete with TikTok and win back engagement. This shows how every measure has side effects that need to be weighed up.
This area of tension therefore presents the major challenge of finding a balance between social interests and the economic interests of the platforms. The aim is to raise awareness of the advantages and disadvantages of the respective measures and to promote moderate regulation that takes into account both the protection of individuals - especially children and young people - and the continued existence of the platforms. Otherwise, we run the risk of new platforms entering the market that may be even less regulatable.
So should it be about constructive solutions rather than restriction?
Both are crucial: information alone will not be enough to combat hate speech on the internet. Clear, decisive regulation is also required, because there is content that should have no place on the internet - such as CSAM (Child Sexual Abuse Material). Such content must be consistently removed.
However, removal alone is not a permanent solution, as the real problem lies deeper: There are people who feel the need to consume such content. These people must either be prosecuted or - where possible - reached and supported through targeted measures.
In my view, there are no easy solutions here, and a one-off assessment or intervention is not enough. Rather, dealing with these problems requires continuous refinement and sharpening of measures. It is a long process that requires constant attention and adaptation in order to be able to react appropriately to unforeseeable developments.
So AI is ultimately both a solution and a problem?
Exactly. AI - like all other technologies - is both a curse and a blessing. This also applies to my earlier examples, in which AI is used for sextortion scams or for producing CSAM.
If you think of AI as a hammer, you could say that the hammer can be used both to harm someone and to build a house. Now you can either ban the hammer - in which case it becomes a complex cost-benefit issue - or you can try to regulate the hammer in such a way that those who make it still have an incentive to produce it. In other words, companies that develop AI must be given incentives to continue creating innovative, responsible applications in the future. Excessive regulation could discourage them from doing so, which in turn would increase the scope for abusive applications and actors that we do not want to support.
This balancing act requires constant dialog and an iterative approach. I would like to be part of this process, contribute my scientific perspective and help gain insights that may help steer individual decisions in the right direction.