Shaping Responsible Futures: Insights on Ethics in AI and Robotics from IAS-19

In July, the Innovation Hub in Robotics hosted the round table “Ethics in AI and Robotics” at the IAS-19 conference, sparking vital debate on the future of technology. Experts explored challenges including bias, accountability, diversity, and long-term responsibility.


In July, the Innovation Hub in Robotics organised the round table “Ethics in AI and Robotics” at the IAS-19 conference, bringing to the forefront one of the most pressing conversations shaping the future of technology. The event brought together renowned experts from Ulysseus European University to reflect on the ethical challenges accompanying the rapid rise of artificial intelligence and autonomous systems.

The panel featured distinguished voices: Prof. José M. Galván (Pontifical University of the Holy Cross, VA), Prof. Peter Sinčák (Technical University of Košice, SK), Prof. Serena Villata (Université Côte d’Azur, FR), Prof. Jesús Sabariego (University of Seville, ES), and MSc Nina Drakulić (Montenegro Robotics / Montenegro Makers, ME). Together, they explored how robotics and AI can be developed and deployed responsibly—balancing innovation with fairness, transparency, inclusivity, and public trust.

By addressing questions of bias, accountability, diversity, and long-term responsibility, the round table offered both critical insights and practical strategies for ensuring that robotics and AI evolve as technologies that truly serve humanity.

The following summary of key takeaways offers a concise overview of the main themes and discussions that emerged during the session.

What are the main ethical challenges you identify in deploying autonomous robots in your own environments, and how do you address these challenges effectively?

  • Short-term thinking over long-term responsibility: There is a persistent focus on immediate outcomes, while long-term ethical implications are often overlooked. Deployment strategies must be guided by sustainable and forward-looking ethical considerations.
  • Corporate-driven data aggregation: Training data used in autonomous systems rarely represent individuals or society as a whole. Instead, they are shaped by corporate objectives and economic incentives, raising serious concerns about fairness, bias, and accountability.
  • Lack of public awareness and participation: Public understanding of the ethical and societal implications of emerging technologies remains limited. There is an urgent need to increase awareness and democratic participation to ensure technology truly serves humanity.
  • Underrepresentation and lack of diversity: Marginalized communities are often excluded from the development and governance of autonomous systems. This lack of diversity undermines inclusivity and can lead to ethically flawed outcomes.
  • Unclear moral responsibility: Ethical analysis is often hindered by a lack of clarity about who is responsible for decisions made by autonomous systems. The foundational technoethical distinction between moral subjects (agents of responsibility) and moral objects (recipients of moral impact) must be more clearly understood and applied.

What effective strategies have you found for addressing social, gender-based, environmental and technological biases while promoting technical awareness through your initiatives?

  • Training data play a central role in shaping AI outcomes, as they inherently reproduce existing biases, languages, and cultural patterns. This awareness should prompt us to reflect on the need to transform our social realities as part of the solution.
  • Promoting transparent and clearly explainable causality in AI systems is a key strategy for challenging systemic ignorance and addressing embedded biases.
  • Developing explainable AI (XAI) tools for end-users helps make AI systems more accessible, understandable, and accountable.
  • Efforts must include opening the “black boxes” of AI by extending critical scrutiny beyond Western-centric systems and embracing diverse global contexts.
  • Framing AI as a common good offers a powerful approach to advancing equality and protecting social rights across different communities.

How do you see advancements in explainability and transparency techniques in robotics shaping human trust and agency in increasingly automated and intelligent environments?

  • Work on tools that help clarify responsibility—highlighting who is accountable for decisions made by autonomous systems. This is critical in a landscape where human and robot rights may diverge.
  • Ensure that algorithms are both accessible and transparent, as this is essential to building informed trust. Prioritise the explanation of decision-making processes, including how and what data and annotations are used during training.
  • Recognise that advancing trust requires more than technical fixes—we also need deep reflection, open dialogue, active citizen participation, and robust metrics that weigh both risks and benefits.
  • Develop tools to help measure the social and ethical impact of robotic systems.
  • Create a “Hippocratic oath” for computer scientists and developers, anchoring ethical responsibility in professional practice.

How can formal and informal institutions ensure that future AI professionals are equipped with technical skills as well as a strong ethical foundation?

  • Expanding presence in underdeveloped areas and promoting the inclusion of minorities and young women through non-formal organizations can foster more inclusive, diverse, and accessible social and technical environments.
  • Education on AI ethics and societal impact must begin in middle and high schools to build awareness and critical thinking from a young age.
  • There is a pressing need for greater consciousness and understanding of the moral dimensions of AI, including a deeper engagement with the principles and regulatory frameworks of instruments like the AI Act.
  • National education systems and nonprofit organizations should be actively leveraged to deliver both formal instruction and community-based learning initiatives.
  • Practical, hands-on training should be strengthened alongside formal education, ensuring students are equipped with real-world skills and ethical judgment.
  • Personal commitment to computer ethics must be encouraged as a foundational mindset for those entering the field, promoting responsibility, transparency, and public interest as guiding values in technological development.

About Ulysseus

Ulysseus is one of the 64 European Universities selected by the European Commission to become the universities of the future. Led by the University of Seville, the Alliance encompasses seven other universities in Europe: the University of Genoa, Italy; Université Côte d’Azur, France; the Technical University of Košice, Slovakia; MCI | The Entrepreneurial School®, Austria; Haaga-Helia University of Applied Sciences, Finland; the University of Münster, Germany; and the University of Montenegro, Montenegro.