The product has a component that relies on a generative AI/ML model configured with inference parameters that produce an unacceptably high rate of erroneous or unexpected outputs.
Generative AI/ML models, such as those used for text generation, image synthesis, and other creative tasks, rely on inference parameters that control sampling behavior, such as temperature, Top P, and Top K. These parameters shape how the model samples from its output probability distribution at inference time. Incorrect settings can lead to unusual behavior such as text "hallucinations," incoherent or repetitive output, or unrealistic images. Such misconfigurations can compromise the integrity of the application. If the results are used in security-critical operations or decisions, then this could violate the intended security policy, i.e., introduce a vulnerability.
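The following is a minimal, self-contained sketch of how the temperature and Top K parameters reshape the sampling distribution during decoding. The logits, function names, and values are illustrative only and do not come from any particular model or framework; real systems expose these parameters through their inference APIs.

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=None):
    """Sample the next token index from raw logits.

    temperature scales the logits before softmax: values > 1 flatten the
    distribution (more random output); values near 0 sharpen it.
    top_k, if set, restricts sampling to the k highest-probability tokens.
    """
    if temperature <= 0:
        raise ValueError("temperature must be positive")

    # Scale logits by temperature and convert to probabilities (softmax).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Optionally keep only the top-k most likely tokens.
    if top_k is not None:
        ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
        keep = set(ranked[:top_k])
        probs = [p if i in keep else 0.0 for i, p in enumerate(probs)]
        norm = sum(probs)
        probs = [p / norm for p in probs]

    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy logits where token 0 is clearly the most plausible continuation.
logits = [5.0, 1.0, 0.5, 0.1]

random.seed(0)
low  = [sample_token(logits, temperature=0.2) for _ in range(10)]
high = [sample_token(logits, temperature=10.0) for _ in range(10)]
print("temperature=0.2 :", low)   # almost always token 0
print("temperature=10.0:", high)  # frequently picks unlikely tokens
```

With a low temperature the sampler nearly always selects the most probable token; with an excessively high temperature the distribution is flattened and unlikely tokens are chosen often, which is one mechanism behind erratic or nonsensical output.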
Impact: Varies by Context; Unexpected State
The product can generate inaccurate, misleading, or nonsensical information.
Impact: Alter Execution Logic; Unexpected State; Varies by Context
If outputs are used in critical decision-making processes, errors could be propagated to other systems or components.
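As a hypothetical illustration of this propagation, the sketch below shows a raw model output feeding a security-relevant threshold with no validation, alongside a guarded variant that constrains the value before use. The generate_text stub, the threshold functions, and the bounds are invented for this example and stand in for a real inference call whose behavior depends on parameters such as temperature, Top P, and Top K.

```python
import re

def generate_text(prompt: str) -> str:
    # Stand-in for a generative model inference call; a miscalibrated
    # model might return prose or a hallucinated value instead of the
    # expected number.
    return "Probably around forty-two, but it could also be 9000."

def unsafe_threshold(prompt: str) -> int:
    # Weakness pattern: the raw model output feeds a security-relevant
    # decision with no validation, so an erroneous value (or a parsing
    # failure) propagates to downstream components.
    return int(generate_text(prompt))

def guarded_threshold(prompt: str, default: int = 100) -> int:
    # Mitigation pattern: extract, validate, and bound the output before
    # it is used in a critical decision; fall back to a safe default.
    match = re.search(r"\b(\d{1,4})\b", generate_text(prompt))
    value = int(match.group(1)) if match else default
    return value if 0 < value <= 1000 else default

print(guarded_threshold("What request-rate limit should we set?"))
```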