Mixtral belongs to the family of open-weight large language models released by Mistral AI and uses a sparse Mixture of Experts (MoE) architecture. The family is available in both raw pretrained and fine-tuned forms, at 7B and 8x7B parameter sizes. Its sizes, open-weight nature, benchmark performance, and 32,000-token context length make it a compelling option for self-hosted LLMs. Note that these open-weight models are not safety-tuned out of the box, so users need to add moderation tailored to their own use cases. We have experience with this model family from developing Aalap, a fine-tuned Mistral 7B model trained on data for specific Indian legal tasks, which has performed reasonably well at an affordable cost.
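To illustrate what self-hosting these weights involves, the sketch below loads the instruction-tuned Mixtral 8x7B checkpoint with the Hugging Face transformers library and runs a single prompt. The model identifier, precision, and memory settings shown here are common Hub conventions rather than part of the Aalap work described above; treat this as a minimal sketch, not a deployment recipe.

```python
# Minimal sketch: loading and querying Mixtral 8x7B Instruct via Hugging Face
# transformers. Assumes a GPU setup with enough memory (or added quantization)
# and the "mistralai/Mixtral-8x7B-Instruct-v0.1" checkpoint from the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # assumed Hub identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory footprint
    device_map="auto",           # spread layers across available GPUs
)

# Example prompt (hypothetical); the chat template formats it for the
# instruction-tuned model.
messages = [{"role": "user", "content": "Summarise the doctrine of res judicata."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# At inference time only a subset of expert feed-forward blocks is activated
# per token, which is what makes the sparse MoE design efficient.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

In full precision the 8x7B weights are large, so self-hosted setups typically rely on multiple GPUs or quantized variants; the same loading pattern applies to the 7B models, which fit far more modest hardware.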