
AI Bootcamp: LLM Finetuning & Deployment


On Friday, July 4, 2025, Float16, together with the Typhoon team at SCB 10X, organized the AI Bootcamp: LLM Finetuning & Deployment at DistrictX, FYI Building, a significant step in promoting AI technology development in Thailand. The event drew overwhelming interest and was a success beyond expectations.

Event Overview

AI Bootcamp was a full-day, hands-on training event designed to take participants from a basic understanding of Large Language Models (LLMs) to practical use, covering the full finetuning and deployment workflow with real tools and GPUs.

Highlights

Morning Session: How to Finetune Typhoon Open-Source LLMs

Speaker: Surapon Nonesung, Research Scientist at SCB 10X

Key Takeaways from the Typhoon Team: 5 Tips for Fine-Tuning an Effective Model

  1. Spend over 80% of your time on data preparation, because data quality is the foundation of a good model
  2. Create at least two evaluation datasets, with at least one containing data the model has never been trained on
  3. During fine-tuning, track both the train set and the eval set to check for overfitting
  4. Evaluate the model before and after fine-tuning to confirm it actually improves
  5. Check and adjust the chat template (system prompt, instruction format, and so on); a good template helps the model answer more accurately and perform better
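A minimal sketch of tips 2, 3, and 5, assuming the Hugging Face datasets and transformers libraries; the file names and model path are placeholders, and the training loop itself is left to whichever trainer you use:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Tip 2: keep at least two evaluation sets, one of them fully held out
# (these JSONL file names are placeholders for your own data).
data = load_dataset("json", data_files="sft_data.jsonl")["train"]
splits = data.train_test_split(test_size=0.1, seed=42)
train_set, eval_set = splits["train"], splits["test"]
held_out = load_dataset("json", data_files="held_out_eval.jsonl")["train"]  # never seen in training

# Tip 5: render every example with the model's chat template so the system
# prompt and instruction format match what the model expects at inference time.
tok = AutoTokenizer.from_pretrained("path/to/typhoon-model")  # placeholder model path

def to_text(example):
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": example["instruction"]},
        {"role": "assistant", "content": example["response"]},
    ]
    return {"text": tok.apply_chat_template(messages, tokenize=False)}

train_set = train_set.map(to_text)
eval_set = eval_set.map(to_text)

# Tips 3 and 4: pass both train_set and eval_set to your trainer so eval loss is
# logged alongside train loss (to catch overfitting), and score the model on
# held_out both before and after fine-tuning to confirm it actually improved.
```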

Afternoon Session: How to Deploy Your Finetuned Typhoon Model on Float16 ServerlessGPU

Speaker: Matichon Maneegard, Founder of Float16

Key Takeaways from the Float16 Team: 3 Techniques for Using LLMs in Real Software Development

  1. Choose the right LLM file format for your purpose:
    1. .safetensors for the Hugging Face ecosystem, with separate files for model weights, tokenizer, and architecture; the right choice for fine-tuning
    2. .gguf for llama.cpp, Ollama, and LM Studio, packaged as a single easy-to-use file; the right choice for inference
  2. Serve the finetuned model behind an OpenAI API-compatible endpoint (see the endpoint sketch after this list):
    1. Existing OpenAI-based code keeps working without a rewrite
    2. Only the endpoint changes, from OpenAI to your own model
    3. Save costs and keep full control
  3. Use structured output (grammar) to improve answers (see the grammar sketch after this list):
    1. Use xgrammar, outlines, or guidance to control the answer format
    2. JSON mode for accurate function calling
    3. Define grammar rules for SQL, option selection, or other specific formats
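A minimal sketch of the endpoint swap in technique 2, assuming the finetuned model is served behind an OpenAI-compatible API (the URL, API key, and model name below are placeholders):

```python
from openai import OpenAI

# Point the standard OpenAI client at your own OpenAI-compatible endpoint
# instead of api.openai.com; the rest of the code stays unchanged.
client = OpenAI(
    base_url="https://your-serverless-gpu-endpoint/v1",  # placeholder endpoint URL
    api_key="YOUR_API_KEY",                              # placeholder key
)

resp = client.chat.completions.create(
    model="my-finetuned-typhoon",  # placeholder name of the deployed model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this support ticket in one sentence."},
    ],
)
print(resp.choices[0].message.content)
```

Because only the base_url and model name change, existing OpenAI-style code keeps working without a rewrite, which is where the cost savings and full control come from.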
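And a minimal sketch of technique 3, using a GBNF grammar with llama-cpp-python to restrict the model to a fixed set of options; the .gguf path and option names are placeholders, and xgrammar, outlines, or guidance offer similar controls (JSON mode works the same way with a JSON grammar or schema):

```python
from llama_cpp import Llama, LlamaGrammar

# Grammar that only accepts one of three answers, for an option-selection task.
grammar = LlamaGrammar.from_string('root ::= "approve" | "reject" | "escalate"')

llm = Llama(model_path="typhoon-finetuned.gguf")  # placeholder .gguf file

out = llm(
    "Decision for this refund request (approve, reject, or escalate):",
    grammar=grammar,   # decoding can only produce strings the grammar accepts
    max_tokens=8,
)
print(out["choices"][0]["text"])
```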

Excellent Feedback

Post-event evaluations brought in feedback that exceeded our expectations:

Selected Comments

"Impressed with the overall event. It was a great learning experience from people who actually do the work" - Participant

"The SCB10X and Float16 teams were very attentive in providing knowledge. Management was excellent. Khun K Matichon from Float16 structured the Lecture & Workshop very well" - Participant

"I really enjoyed the event. Especially how smooth and well-streamlined the event was." - Participant


What Participants Received

Practical Knowledge

  • Hands-on experience in fine-tuning and deploying LLMs
  • Using Typhoon open-source models
  • Techniques to improve LLMs for Software Development work

Benefits

  • 100% Free - no charge
  • GPU Credits from Float16 worth 1,000 baht
  • Digital Certificate
  • Free Lunch

Network

Participants met and exchanged experiences with:

  • Professors, students, researchers, and Data Scientists
  • Engineers and Developers
  • Startup founders and entrepreneurs

Thank you to everyone who made this event possible, including:

  • SCB 10X and NVIDIA teams for their full support
  • DistrictX for the venue
  • All participants for feedback and great ideas

Float16 is committed to helping drive Thailand toward becoming an AI leader in the region. This event is just the beginning of building a strong and sustainable community of AI practitioners.

#AIBootcamp #LLMFinetuning #Float16 #SCB10X #NVIDIA #MachineLearning #AIThailand