AI Tool Selection Methodology

The methodology describes each evaluated tool by highlighting:

  • Tool characteristics
  • Technical criteria
  • Legal and ethical criteria

The tools were evaluated against legal and ethical criteria using a traffic-light principle based on the risk levels described in the AI Act:

  • minimal risk
  • limited risk
  • high risk
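The traffic-light mapping above can be sketched as a small lookup. This is an illustrative sketch only: the function name and any tool-to-level assignments are hypothetical, not actual assessments.

```python
# Hypothetical sketch of the traffic-light principle: AI Act risk levels
# mapped to traffic-light colours. Names here are illustrative placeholders.
RISK_TO_COLOUR = {
    "minimal": "green",
    "limited": "yellow",
    "high": "red",
}

def traffic_light(risk_level: str) -> str:
    """Return the traffic-light colour for an AI Act risk level."""
    try:
        return RISK_TO_COLOUR[risk_level.lower()]
    except KeyError:
        raise ValueError(f"Unknown risk level: {risk_level!r}")
```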


  • Critically assess the data protection claims declared by tools; watch for and remove sensitive information when submitting prompts to AI tools, e.g., mask or encode it before sending queries to tools that declare the use of LLMs (Large Language Models);
  • Evaluate the potential risk posed by a tool in each specific case, considering the function it performs;
  • Use assessment tools critically, as advisory instruments rather than as determinants of the final decision;
  • Declare the use of “limited risk” tools, i.e., indicate that the content was generated using AI, e.g., ChatGPT;
  • Check whether the tool provides an option to disable the use of data for training purposes; it is recommended to enable this option;
  • Critically evaluate generated content for possible “AI hallucinations”;
  • Use professional, paid versions of tools to better protect submitted data from being used for training generative AI models.
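The first recommendation above, removing sensitive information before a prompt leaves your machine, can be sketched as a simple redaction pass. The patterns and placeholder tags below are assumptions for illustration; real redaction must be reviewed against your own data types and legal requirements.

```python
import re

# Illustrative sketch: mask common sensitive patterns in a prompt before
# submitting it to an AI tool. Patterns and tags are assumptions, not a
# complete or legally sufficient redaction scheme.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a bracketed placeholder tag."""
    for tag, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{tag}]", prompt)
    return prompt

print(redact("Contact me at jane.doe@example.com or +370 600 12345."))
```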