• PookaMacPhellimen (OP) · 1 year ago

    https://github.com/QwenLM/Qwen

    Also released was a 1.8B model.

    From Binyuan Hui’s Twitter announcement:

    “We are proud to present our sincere open-source works: Qwen-72B and Qwen-1.8B! Including Base, Chat and Quantized versions!

    🌟 Qwen-72B has been trained on high-quality data consisting of 3T tokens, boasting a larger parameter scale and more training data to achieve a comprehensive performance upgrade. Additionally, we have expanded the context window length to 32K and enhanced the system prompt capability, allowing users to customize their own AI assistant with just a single prompt.

    🎁 Qwen-1.8B is our additional gift to the research community, striking a balance between maintaining essential functionalities and maximizing efficiency, generating 2K-length text content with just 3GB of GPU memory.

    We are committed to continuing our dedication to the open-source community and thank you all for your enjoyment and support! 🚀 Finally, Happy 1st birthday ChatGPT. 🎂”
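
    For anyone curious how the quantized 1.8B build fits in ~3GB, here’s a minimal sketch of the `transformers` loading path from the Qwen repo’s README. The Int4 model id and the `system` argument of Qwen’s `chat()` helper are taken from that README as I remember it; treat the exact ids, the auto-gptq dependency, and the memory figure as assumptions to verify against the repo.

    ```python
    # Minimal sketch: run the quantized Qwen-1.8B chat model with a custom
    # system prompt. Assumes the HF id "Qwen/Qwen-1_8B-Chat-Int4" and that
    # auto-gptq is installed for the Int4 weights; check the repo README.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen-1_8B-Chat-Int4"

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",       # place the model on the available GPU
        trust_remote_code=True,  # Qwen ships its own modeling/tokenizer code
    ).eval()

    # "Customize your own AI assistant with just a single prompt" maps to the
    # system argument of the chat() helper in Qwen's remote code.
    response, _history = model.chat(
        tokenizer,
        "Introduce yourself in one sentence.",
        history=None,
        system="You are a terse release-notes assistant for the Qwen project.",
    )
    print(response)
    ```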

    • candre23 · 1 year ago

      we have expanded the context window length to 32K

      Kinda buried the lede here. This is far and away the biggest feature of this model. Here’s hoping it’s actually decent as well!
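
      A quick way to sanity-check the window before trusting it: count tokens with the Qwen tokenizer before committing a long document to the prompt. Rough sketch only; the model id and the 32768 figure come from the release, everything else (file name, headroom) is illustrative.

      ```python
      # Rough sketch: confirm a long prompt fits the advertised 32K window,
      # leaving headroom for the generated reply.
      from transformers import AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained(
          "Qwen/Qwen-72B-Chat", trust_remote_code=True
      )

      with open("long_report.txt") as f:  # hypothetical long input
          document = f.read()

      n_tokens = len(tokenizer.encode(document))
      print(f"{n_tokens} prompt tokens (window: 32768)")
      assert n_tokens <= 32768 - 1024, "leave room for the model's answer"
      ```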