Main Concept

Continued pre-training consists of further training a pre-trained model on new, domain-specific data before fine-tuning it. It expands the model’s knowledge of a specific domain. It’s like giving a boy who has already finished primary school a book about Birds of North America so he can expand his knowledge of that specific subject.
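
Mechanically, continued pre-training is just the original self-supervised next-token objective run on new data. Below is a minimal sketch using the Hugging Face transformers and datasets libraries; the base model (gpt2), the hyperparameters, and the corpus file birds_corpus.txt are illustrative assumptions, not part of this note.

```python
# Minimal sketch: continue pre-training a causal LM on a domain corpus.
# Assumes: pip install transformers datasets, and a hypothetical file
# birds_corpus.txt of unlabeled domain text (e.g., about North American birds).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # any pre-trained causal LM; gpt2 keeps the sketch small
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)  # start from pre-trained weights

# Unlabeled, domain-specific text: same self-supervised objective as the
# original pre-training, just on new data.
corpus = load_dataset("text", data_files={"train": "birds_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-birds",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        learning_rate=5e-5,  # typically lower than from-scratch pre-training
    ),
    train_dataset=tokenized,
    # mlm=False -> causal LM objective: labels are the inputs shifted by one
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("gpt2-birds")  # ready for task-specific fine-tuning next
```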

Context

Continued pre-training sits between generic pre-training and task-specific fine-tuning in the model-adaptation pipeline. It matters when the original pre-training corpus under-represents a domain’s vocabulary, facts, or style: further self-supervised training on domain text gives the model that background knowledge, so later fine-tuning can focus on task behavior instead of learning the domain from scratch.

Key Aspects

  • Difference from fine-tuning: continued pre-training expands knowledge via self-supervised training on unlabeled domain text; fine-tuning specializes behavior via supervised training on labeled task examples (see the data sketch after this list).
  • Amazon Bedrock offers continued pre-training as a model-customization option for select foundation models (e.g., Amazon Titan Text); support varies by model. Research topic for deeper exploration later.
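
To make the contrast concrete, here is a small sketch of what a training record typically looks like at each stage; both records are invented for illustration.

```python
# Continued pre-training record: raw, unlabeled domain text. The model
# learns by predicting every next token; there is no answer column.
pretraining_record = {
    "text": (
        "The bald eagle (Haliaeetus leucocephalus) is a bird of prey "
        "found near large bodies of open water across North America."
    )
}

# Fine-tuning record: a labeled prompt/response pair. Loss is usually
# computed only on the response tokens, teaching task behavior rather
# than broad domain knowledge.
finetuning_record = {
    "prompt": "Identify the bird: white head, dark brown body, yellow beak.",
    "response": "That description matches the bald eagle.",
}

print(pretraining_record["text"])
print(finetuning_record["prompt"], "->", finetuning_record["response"])
```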

Applications

  • Adapting a general LLM to specialized domains whose text is scarce in web-scale corpora: medicine, law, finance, scientific literature, source code.
  • Teaching a model a company’s internal documentation, jargon, and products before fine-tuning it for support or search tasks.
  • Improving coverage of under-represented languages by continuing pre-training on text in that language.

Examples

  • BioBERT: BERT continued pre-trained on PubMed abstracts and PMC articles, improving biomedical NLP benchmarks.
  • Code Llama: Llama 2 further trained on a code-heavy corpus before instruction fine-tuning.
  • “Don’t Stop Pretraining” (Gururangan et al., 2020): shows domain-adaptive and task-adaptive pre-training consistently improve downstream task performance.


