• FPhamB · 11 months ago

    It looks weird going from 175B text-davinci-003 to 20B gpt-3.5-turbo. But a) we don’t know how they count this (quantization effectively halves the model’s size in memory, not its parameter count), and b) we don’t know anything about how they made it.
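    Back-of-the-envelope on that size point (the parameter counts here are rumors, not confirmed by OpenAI): quantization shrinks bytes per weight, so a model’s footprint can halve without the parameter count changing. A quick sketch:

    ```python
    # Rough memory footprint at different precisions.
    # Parameter counts are illustrative rumors, not official figures.
    def model_size_gb(params_billion: float, bytes_per_param: float) -> float:
        """Raw weight storage in GB (ignores activations, KV cache, etc.)."""
        return params_billion * bytes_per_param

    for name, params in [("text-davinci-003 (175B?)", 175),
                         ("gpt-3.5-turbo (20B?)", 20)]:
        for prec, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
            print(f"{name} @ {prec}: {model_size_gb(params, nbytes):.0f} GB")
    ```

    So a 20B model in fp16 needs roughly the same memory as an 80B model in int4, which is one reason raw "B" numbers are hard to compare.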

    except c) they threw much more money at it, using humans to clean the dataset. A clean dataset can make a 20B model sing. Meanwhile we are using Meta’s chaos in Llama 2 70B, with everything thrown at it…