Section 4.6 covers the risks and what is going to happen with open-source models and fine-tuning.

4.6.  Soliciting Input on Dual-Use Foundation Models with Widely Available Model Weights.  When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model.  To address the risks and potential benefits of dual-use foundation models with widely available weights, within 270 days of the date of this order, the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State, shall:

(a)  solicit input from the private sector, academia, civil society, and other stakeholders through a public consultation process on potential risks, benefits, other implications, and appropriate policy and regulatory approaches related to dual-use foundation models for which the model weights are widely available, including:

(i)    risks associated with actors fine-tuning dual-use foundation models for which the model weights are widely available or removing those models’ safeguards;

(ii)   benefits to AI innovation and research, including research into AI safety and risk management, of dual-use foundation models for which the model weights are widely available; and

(iii)  potential voluntary, regulatory, and international mechanisms to manage the risks and maximize the benefits of dual-use foundation models for which the model weights are widely available; and

(b)  based on input from the process described in subsection 4.6(a) of this section, and in consultation with the heads of other relevant agencies as the Secretary of Commerce deems appropriate, submit a report to the President on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, as well as policy and regulatory recommendations pertaining to those models.

https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

So what's the definition of “dual-use foundation model”? It's as follows:

(k)  The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

(i)    substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;

(ii)   enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or

(iii)  permitting the evasion of human control or oversight through means of deception or obfuscation.

Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities. 
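One concrete part of that definition is the "at least tens of billions of parameters" threshold. As a rough sanity check, you can estimate a dense decoder-only transformer's parameter count from its layer count and hidden width using the common `12 * n_layers * d_model^2` approximation (it ignores embeddings and assumes a 4x FFN expansion, so it undercounts some architectures). This is just an illustrative sketch; the configs below are generic examples, not any specific model:

```python
def estimate_params(n_layers: int, d_model: int) -> int:
    """Rough parameter count for a dense decoder-only transformer.

    Per layer: attention is ~4*d_model^2 and a 4x-expansion FFN is
    ~8*d_model^2, giving ~12*d_model^2 per layer. Embeddings and
    biases are ignored, so treat this as an order-of-magnitude check.
    """
    return 12 * n_layers * d_model * d_model


# Lower bound for "tens of billions of parameters" in the EO definition.
THRESHOLD = 10_000_000_000

# Illustrative configs (not taken from any particular model card).
configs = [
    ("7B-class", 32, 4096),
    ("70B-class", 80, 8192),
]

for name, layers, width in configs:
    est = estimate_params(layers, width)
    status = "meets" if est >= THRESHOLD else "below"
    print(f"{name}: ~{est / 1e9:.1f}B params -> {status} threshold")
```

Under this approximation a 32-layer, 4096-wide model lands around 6B parameters (below the threshold), while an 80-layer, 8192-wide model lands around 64B (above it), which matches the intuition that the definition targets the largest open-weight releases.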

  • SomeOddCodeGuyB · 11 months ago

    This part right here, or at least my anticipation that something like it would come out, is why I've been hesitant to join in on fine-tuning models. My wife has had an interest in doing so for a while, and we have the knowledge and resources to at least slightly contribute, but I'm honestly afraid of exactly what this is implying: that they haven't determined what category fine-tuning falls into, and will decide that at a later date.

    My fear is that they will come up with something really outlandish, like personal liability for “model developers”, which would include fine-tuners, for how their models are used. That would be absolutely horrific.

    270 days is a long way away, so for now I recommend everyone keep an eye out for any of those “call in and let us know what you think!” kind of things (and do so! Let them know what you think!), but otherwise all we can do is keep going business as usual.

    As that date approaches, though… it might be worth grabbing a few of your favorite models juuuuuust in case they decide something really stupid.