Developing domain-specific languages with the Meta Attack Language (MAL) framework is essential yet labor-intensive in cybersecurity threat modeling, as it demands technical expertise to convert unstructured knowledge into formal models. This study presents MAL-LLM, a system that leverages Large Language Models (LLMs) to automate the generation of MAL languages from sources such as technical documentation and incident reports. Developed using a Design Science Research approach, MAL-LLM produces syntactically correct and semantically rich MAL languages more efficiently than manual methods, outperforming both a baseline LLM and human-created models in speed and structural accuracy, with minimal errors. Qualitative evaluation via the ExPerT framework shows high recall and domain relevance, though precision varies with source complexity. The system also generates executable MAL-related files for integration into existing toolchains. This work demonstrates that LLMs can reduce development time and improve model quality, though challenges such as hallucination control and stylistic consistency remain.