Accelerating the social implementation of safe and secure AI by applying the Hitachi AI Ethical Principles to AI-related research and development under a highly reliable operational structure
AI has made great strides in recent years. Used appropriately, it can bring significant benefits to social life, but because ethical risks are inherent in the technology, the risk of adverse impacts from AI usage has been recognized for some time. To address these ethical risks, Hitachi, Ltd. (hereinafter Hitachi) moved quickly to formulate its “AI Ethical Principles.” In particular, the Research & Development Group (hereinafter the R&D Group), abiding by the AI Ethical Principles, conducts thorough risk assessments and continuous education and awareness activities for all researchers engaged in AI-related research and development. In this article, Professor Tatsuhiro UENO of Waseda University, a leading expert in AI and copyright, discusses these initiatives and their future visions with Chief Engineer Toshiki SUGAWARA and Chief Researcher Masayoshi MASE of the R&D Group.
Topics discussed:
- Three standards of conduct and seven items to be addressed
- The value of an AI Ethics Committee composed of members from diverse research centers
- Establishing a framework is just the start – committing to continuous improvement
- “AI Ethical Principles” remain relevant and essential in guiding us in the era of generative AI
- A future where AI itself evaluates compliance with ethics
Three standards of conduct and seven items to be addressed

Tatsuhiro: I specialize in intellectual property law, centering on copyright law. I also address practical challenges related to the internet and the entertainment business, and these days I’m interested in AI and copyright as well. From this perspective, I feel Hitachi was very quick in establishing its principles on AI ethics as early as 2021.

Toshiki: Hitachi formulated the AI Ethical Principles for its social innovation business in February 2021. They are intended to develop and implement human-centric AI in society to solve increasingly complex social issues. The principles consist of three standards of conduct and seven practice items to be addressed. The three standards of conduct correspond to the phases of Planning, Social Implementation, and Maintenance. The seven practice items are the following: Safety; Privacy; Fairness, Equality and Prevention of Discrimination; Proper and Responsible Development and Use; Transparency, Explainability and Accountability; Security; and Compliance. (*)
* The AI Ethical Principles / The AI Ethics White Paper

Tatsuhiro: What specific actions do you take to adhere to the AI Ethical Principles in the R&D Group’s research and development activities?
Toshiki: The objective of our AI ethics initiatives in the R&D Group is “to respect treaties, laws and regulations, the Hitachi AI Ethical Principles, and so on, and to promote research and development that contributes to the realization of a safe, comfortable and resilient society in which human dignity is protected, and to the improvement of people’s quality of life (QoL).”
We have established two pillars for this objective: researcher education and research theme evaluation. For the first pillar, researcher education, researchers learn AI ethics during group training upon joining the company, which serves as an introduction. To ensure AI ethics take hold, we organize workplace discussions based on risk prediction training and provide AI researcher education to confirm understanding.

In the second pillar, research theme evaluation, we identify AI ethics risks in individual research themes. As the Inlet Evaluation, the AI ethics checklist is used to check whether the research content is relevant to AI. If applicable, risk is assessed before reviews by advisors (workplace representative members), the secretariat, and others. Researchers then carry out their work while monitoring the risks involved in their actual research themes. If the residual risk remains high, the research department considers countermeasures and shares information. Once research results are obtained, as the Exit Rating, the AI ethics checklist is used again when applying for news releases, PoCs (proofs of concept) and external demonstrations. By having two gates, one at the beginning and one at the end, we thoroughly assess adherence to AI ethics.
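The two-gate flow described above might be sketched roughly as follows. This is an illustrative sketch only: the function names, fields, and risk threshold are hypothetical, not Hitachi’s actual checklist logic.

```python
# Hypothetical sketch of the two-gate review flow ("Inlet Evaluation" at
# theme start, "Exit Rating" before release). All names and the threshold
# are illustrative assumptions, not the actual Hitachi checklist.

RISK_THRESHOLD = 3  # hypothetical residual-risk cutoff

def inlet_evaluation(theme):
    """Gate 1: is the theme AI-related, and if so, how risky is it initially?"""
    if not theme["uses_ai"]:
        return "not applicable"
    # High initial risk triggers review by advisors and the secretariat.
    return "review by advisors" if theme["risk"] >= RISK_THRESHOLD else "proceed"

def exit_rating(theme):
    """Gate 2: re-check residual risk before news releases, PoCs, or demos."""
    if theme["residual_risk"] >= RISK_THRESHOLD:
        return "consider countermeasures"
    return "approved for release"

theme = {"uses_ai": True, "risk": 4, "residual_risk": 1}
```

In this sketch, a theme flagged at the inlet gate can still pass the exit gate once countermeasures have lowered its residual risk.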
The value of an AI Ethics Committee composed of members from diverse research centers

Tatsuhiro: Then, to put these into practice, specifically what organization structure do you have in the R&D Group?
Toshiki: We have established the R&D Group AI Ethics Committee (hereinafter the AI Ethics Committee). Since the R&D Group has a wide range of research domains, review and feedback from researchers in various fields are needed. Each R&D center submits to the AI Ethics Committee the results of self-assessment using the checklist. The committee conducts a review and provides feedback, considering the research background and workplace conditions.
Masayoshi: The activities of the AI Ethics Committee are led mainly by the workplace representative members selected from each research center. One example is the workplace discussions based on KYT risk prediction training, part of the researcher education mentioned above. Risk prediction training is a method in which participants discuss where risks may exist in the workplace to prevent accidents, for example, in how equipment is operated or installed in factories. This training is often conducted as part of occupational health and safety education at Hitachi, and we have applied the method to AI ethics. We utilize it in our training so that researchers can learn the risks brought about by a lack of AI ethics.


More specifically, workplace representative members select a risk prediction case related to the AI Ethical Principles, which is then discussed in a group meeting by researchers in the workplace. One example might be a setting where research is being conducted on a service that utilizes plant data stored in a virtual space using generative AI. Researchers discuss where the risks of applying AI may lie. Example responses might be, “Rather than allowing anything to be queried through generative AI, we should ensure proper development and appropriate use that clarifies the scope of use and responsibility,” or “Data-based judgement poses a risk to safety if the data is incorrect, so human judgement must be incorporated.” This is part of our AI ethics education in the workplace, studying AI risks on actual research themes.
Toshiki: The cases used in risk prediction training are prepared by workplace representative members. They are presented as fictional, but they are realistic because they are created from real-life examples. Researchers are encouraged not just to identify issues affecting them but also to proactively examine, and even judge, what could present a risk once research results are applied to commercial services and products. In the area of AI ethics, developing principles is not easy in itself, but what is even more difficult is putting them into practice in a large organization. Through these training activities, I hope that researchers can gradually develop the habit of considering AI ethics.
Tatsuhiro: What is your opinion about the relation between compliance and ethics?

Masayoshi: We discuss ethics and the legal system during the annual ethics education using examples. In it, we explain that there are “human ethics” and “AI ethics.” In the case of human ethics, when a person behaves or expresses something in response to an event, humans themselves judge whether it is ethically or socially right. If it is wrong, feedback works so that they can learn from it and make a correct judgement next time.

Meanwhile, what about AI? If nothing is done, the risk is that wrong behavior or representation could be left as it is. AI is implemented in society after a concept of a system fit for purpose is created, designed, deployed and verified. Society’s use of this AI, in other words its operation, corresponds to human behavior or representation. When an unethical event happens, feedback must be provided somewhere. While humans can simply reflect on it, AI involves many processes from concept planning through social implementation and even maintenance. To identify which process went wrong, feedback needs to be given to the entire process of building the AI system. AI ethics thus encompass a very broad range, including compliance with laws, regulations, and rules.

We want to ensure AI ethics in the R&D Group by thoroughly instilling and regularly communicating this way of thinking. Even if Japan introduces legislation like the EU AI Act (the world’s first comprehensive AI regulation), we believe we can respond flexibly by making modifications based on our established stance towards AI ethics.
Establishing a framework is just the start – Committing to continuous improvement
Tatsuhiro: In the R&D Group, what achievements have you made since you started AI ethics activities?
Toshiki: During the planning phase, researchers assess whether their scope falls under AI research. If it does, they judge whether there are any issues, and if there are, they submit the checklist. AI-related research is, of course, increasing rapidly: the annual total has recently exceeded 500 submissions and is growing year on year. The checklists are examined thoroughly by workplace representative committee members, and we request review and feedback especially on research that seems to carry high residual risks.

Workplace representative committee members have a difficult role to play. Nominated by each research center, they are engaged in this activity with a strong sense of professionalism. Bringing researchers from different fields together helps identify risks that would not be found from the perspective of a single research discipline. As we continuously consider how to make checks easier and assessments more accurate, we receive improvement ideas on the checklist almost every year, for instance, creating a guide referencing past AI ethics discussion results and visualizing the AI ethics checklist. We are incorporating these proposals to improve our initiatives.
Tatsuhiro: Generally speaking, researchers want to devote themselves to research. Some may find it bothersome to follow the AI Ethical Principles.

Masayoshi: It’s not deniable that there are no such researchers. However, using the checklist, researchers can make an assessment by themselves. Then, if they are worried or feel that there may be issues, they can consult with workplace committee representative members and receive advice, which gives peace of mind to the research and development activity itself. I rather think that it helps accelerate their research and development work (for those who feel it tedious).
Toshiki: This is a kind of risk communication. I believe it is important for everyone, from researchers to workplace representative committee members and even the business divisions, to share the risk status through checklist submission, rather than letting an individual researcher proceed with research after unreasonably judging that the AI ethics risk must be low.
Tatsuhiro: I understand very well that you use the checklist for AI ethics, consult the AI Ethics Committee when necessary, and that this has become second nature to researchers. At the beginning, you mentioned education, including an annual program for researchers. What education do you provide?
Toshiki: Once a year, Masayoshi-san and I give a lecture on cutting-edge topics related to AI ethics to an internal audience. Furthermore, the AI Ethics Committee invites external speakers. In the past, we had the honor of inviting Professor Ueno as a guest speaker to talk about copyright topics. Technology advances and legal systems change so quickly that we plan to invite external speakers to give a lecture at least once a year so that we can provide updated information.

Tatsuhiro: These are impressive initiatives indeed. However, do you think that discussing AI ethics could apply a brake to research and development?
Masayoshi: That depends on how you look at it. Suppose that no discussion on AI ethics is held during the research and development phase. Once the research results are translated into specific products or services, other divisions such as business and intellectual property are highly likely, if not certain, to come back to us with a barrage of ethical findings. After all, Hitachi’s AI Ethical Principles are companywide principles. Conducting checks during the initial R&D phase is therefore overwhelmingly more efficient than reviews just before commercialization.
Through its social innovation business, Hitachi is engaged in many critical societal infrastructure projects. This sector is extremely cautious in adopting AI. In general, we have started to see AI technologies like image recognition and deep learning integrated into products and services, but not yet in critical infrastructure, because expert judgement is required on the fundamental question of where AI can be used. For that very reason, it is essential, even at this stage, to have a process in place that ensures every single researcher considers AI ethics. In this sense, we may describe our AI Ethical Principles as offensive, not defensive.
AI Ethics Principles remain relevant and essential in guiding us in the era of generative AI
Tatsuhiro: Since around 2020, AI image generators, ChatGPT and other text generators have rapidly advanced and become commonplace. At the same time, there is a growing sense of crisis voiced mainly by creators. When you publish a press release saying that such things will be made possible by AI, this often triggers a backlash from them: “We will lose our jobs.” In the initiatives of the AI Ethics Committee, how do you deal with the relationship between generative AI and AI ethics?
Toshiki: The AI Ethical Principles are principles, covering a broad and universal concept. Since generative AI is also included, as long as you adhere to them, you should be able to manage most situations. What is important is to communicate internally and externally that Hitachi has initiatives to strictly observe the AI Ethical Principles, that the operational structure to implement them is highly reliable, and that the content is always kept up to date.

Of course, rapidly growing generative AI also requires separate handling. This is where the Generative AI Center, established in 2023, comes into play. As a CoE (Center of Excellence) organization, the center specializes in generative AI.
Masayoshi: In reality, the boundaries between generative and non-generative AI are surprisingly unclear. AI in autonomous driving is normally thought of as non-generative, but it can also be viewed as AI generating certain signals. Defining only AI that creates content as generative AI is questionable; the differences may not be as large as they seem. If that is the case, the ethical principles should be similar whether for AI or generative AI. Thus, I have no doubt that the AI Ethical Principles can cover the larger part.
A future where AI itself evaluates compliance with ethics
Tatsuhiro: Listening to you, I’m impressed that you as a company started working on AI ethics early on, that you put safety over efficiency and business and that you not only formulated the AI Ethical Principles but also continue to manage the operational structure to ensure that they are embedded in the organization. Are these initiatives also conducted by other companies?
Masayoshi: I believe that many of the practice items addressed under the AI Ethical Principles are common among other companies. What distinguishes us from many IT players is that Hitachi has social innovation business at its core. In infrastructure such as energy and mobility, focus is placed on OT (operational technology) as well as IT, and ensuring safety and compliance with laws and regulations is strongly expected. Having a long product cycle is another characteristic. It may be distinctively unique to Hitachi that we adopt AI in such social infrastructure, keep it updated, and use it as a starting point for innovation.
Tatsuhiro: Generative AI requires verification of whether training data infringes copyright or other laws and regulations, and whether the generated output does so. Can you address this through the monitoring system under the AI Ethical Principles?
Toshiki: Once we can explain the safety of the training data itself and the transparency of training, this may serve as a differentiator for Hitachi’s AI. In this area, our approaches include Explainable AI (XAI), with which we can explain the output of AI. We believe that XAI can contribute to fairness and transparency under the AI Ethical Principles.
Masayoshi: With XAI, humans can understand the behavior of AI models. In fact, during social infrastructure projects, if we can’t explain the rationale behind an AI decision in detail, the AI is not adopted. This has been a stumbling block we have faced over and over. Similarly, for copyright concerns, we believe XAI could promote understanding by making such explanations possible.
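To give a concrete flavor of how XAI can surface the rationale behind a model’s decisions, here is a minimal sketch of one common technique, permutation importance: scrambling one input feature and measuring how much the model’s error grows. The toy model, data, and a deterministic “shuffle” are illustrative assumptions, not the methods discussed in the interview.

```python
# Minimal XAI sketch (permutation importance) on a toy model; all values
# here are hypothetical illustrations, not an actual deployed system.

def model(x):
    # Toy "trained" model: output depends strongly on x[0], not at all on x[1].
    return 3.0 * x[0] + 0.0 * x[1]

def mse(data, targets):
    # Mean squared error of the model over a dataset.
    return sum((model(x) - t) ** 2 for x, t in zip(data, targets)) / len(data)

def permutation_importance(data, targets, feature):
    """Error increase when one feature's values are scrambled across rows.

    A large increase means the model relies on that feature, which helps
    explain (and audit) its behavior for non-specialists.
    """
    baseline = mse(data, targets)
    scrambled = [list(x) for x in data]
    col = [x[feature] for x in data]
    for row, v in zip(scrambled, reversed(col)):  # deterministic "shuffle"
        row[feature] = v
    return mse(scrambled, targets) - baseline

data = [(0.0, 5.0), (1.0, 4.0), (2.0, 3.0), (3.0, 2.0)]
targets = [model(x) for x in data]  # model fits perfectly, so baseline error is 0

importances = [permutation_importance(data, targets, f) for f in (0, 1)]
```

Running this gives a high importance for the first feature and zero for the second, matching how the toy model was built; attributions like these are one way an AI decision can be explained to reviewers.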
Tatsuhiro: Once you can explain AI behavior by means of XAI, a world might emerge in the future where AI itself assesses adherence to AI ethics.

Masayoshi: By using AI, humans might be able to focus on higher-level judgement once AI ethical compliance is ensured. For instance, AI usage in social infrastructure poses challenges different from the automatic generation of images and texts. When AI judgements differ from the expected recommendations, how would you fix them? How would you control massive language models? I regard these as important research topics.
Tatsuhiro: The EU AI Act will take full effect in 2026. In principle, it applies within the EU, but companies outside the EU could be required to comply if their services are provided to customers in the region. Are the AI Ethical Principles sufficient to follow global trends like this?
Toshiki: As the market is already open to the world, we can’t ignore the EU’s moves. Hitachi has kept an eye on trends in the EU from early on and considered its responses. The AI Ethical Principles were formulated to be universal based on developments in the EU, so we believe that we can comply to some extent. Besides, once issues become more apparent, the Generative AI Center, for instance, can take the lead in creating a specific organization or system to continue addressing them. This itself also characterizes Hitachi’s approach to AI ethics.

Profiles
(As at the time of publication)

UENO Tatsuhiro
Professor, Faculty of Law, Waseda University
Tatsuhiro Ueno is considered a leading specialist in copyright law, and is currently a professor of Intellectual Property Law and a co-director of the Research Center for the Legal System of Intellectual Property (RCLIP) at Waseda University. He has also been a visiting scholar at the Max Planck Institute for Intellectual Property and Competition Law and the Ludwig Maximilian University of Munich. Ueno serves on many boards and committees including that of the Law and Computers Association of Japan, and in the Study Group on Intellectual Property Rights in the AI Era under the jurisdiction of the Cabinet Office of Japan’s Intellectual Property Strategy Headquarters Committee.

SUGAWARA Toshiki
Chief Engineer, Academia & Government Relations Department,
Technology Management Center,
Research and Development Group, Hitachi Ltd.
Sugawara has been serving as the Secretariat for the R&D Group AI Ethics Committee since its launch in 2021. His areas of responsibility include negotiations and compliance (such as bioethics and research integrity).

MASE Masayoshi
Chief Researcher, Media Intelligent Processing Research Department,
Advanced Artificial Intelligence Innovation Center,
Research and Development Group, Hitachi, Ltd.
As a visiting researcher at Stanford University, Mase participated in joint research on XAI from 2019 to 2021. He has been engaged in research and development that underpins AI trust and governance.
Related links
AI Governance and Ethics in Social Innovation Business : SPECIAL ISSUE 2022 : Hitachi Review