Fine-tuning
The process of further training a pre-trained foundation model on a smaller, task-specific dataset to adapt its behavior for a particular use case. Fine-tuning is cheaper than training from scratch and often produces better results for specialized tasks. From a governance perspective, fine-tuning raises questions about data provenance (what data was used?), liability (if the fine-tuned model produces harmful outputs, who is responsible?), and EU AI Act classification (fine-tuning can change a model's risk category).
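The mechanics can be illustrated with a deliberately tiny sketch: a one-weight linear model stands in for the foundation model, "pre-training" fits it to a broad dataset, and fine-tuning then adapts the already-trained weight with a few small gradient steps on task-specific data. Everything here (the model, the datasets, the hyperparameters) is a toy assumption for illustration, not how production fine-tuning is implemented.

```python
# Toy illustration of fine-tuning: a one-weight linear model y = w * x
# stands in for a pre-trained foundation model (an assumption for clarity).

def train(w, data, lr, steps):
    """Plain gradient descent on mean squared error for y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# "Pre-training" data: a broad dataset following y = 2x.
pretrain_data = [(float(x), 2.0 * x) for x in range(1, 11)]
w_pretrained = train(0.0, pretrain_data, lr=0.01, steps=200)

# Task-specific data: the specialized domain follows y = 2.5x instead.
task_data = [(1.0, 2.5), (2.0, 5.0), (3.0, 7.5)]

# Fine-tuning: start from the pre-trained weight rather than from scratch,
# and take a small number of low-cost gradient steps on the new data.
w_finetuned = train(w_pretrained, task_data, lr=0.01, steps=50)

print(round(w_pretrained, 2))  # close to 2.0
print(round(w_finetuned, 2))   # shifted toward 2.5
```

The point of the sketch is the starting weight: fine-tuning reuses what pre-training learned, which is why it is cheaper than training from scratch, but the resulting behavior is shaped by whatever task data was used, which is where the provenance and liability questions come from.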
Why this matters for your team
If your team fine-tunes a model on company or customer data, you take on governance obligations that a plain API user avoids. Document your training data sources, run bias tests on the fine-tuned model, and check whether the new behavior changes its EU AI Act risk classification.
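One way to make the documentation step concrete is a structured provenance record kept alongside each fine-tuning run. The schema below is a hypothetical sketch; every field name and value is an illustrative assumption, not a mandated or standard format.

```python
# Hypothetical provenance record for a fine-tuning run. Field names and
# example values are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetRecord:
    name: str
    source: str                    # where the data came from
    license: str                   # usage rights covering the data
    contains_personal_data: bool   # flags GDPR-relevant content

@dataclass
class FineTuneProvenance:
    base_model: str
    datasets: list[DatasetRecord] = field(default_factory=list)
    bias_tests_run: list[str] = field(default_factory=list)

record = FineTuneProvenance(
    base_model="example-open-model-7b",  # hypothetical model name
    datasets=[
        DatasetRecord(
            name="support-tickets-2024",
            source="internal CRM export",
            license="internal use only",
            contains_personal_data=True,
        )
    ],
    bias_tests_run=["demographic parity check"],
)

# Serializable for audit trails or regulator requests.
print(asdict(record)["base_model"])
```

Keeping such a record answers the "what data was used?" question up front and gives you the paper trail a risk-classification review will ask for.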
A legal tech startup fine-tunes an open-source language model on a corpus of case law. Under the EU AI Act, this may constitute providing a GPAI model, triggering documentation obligations.