Meta Llama 3

License
META LLAMA 3 COMMUNITY LICENSE AGREEMENT

Meta Llama 3 Version Release Date: April 18, 2024 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.

“Documentation” means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.

“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.

“Meta Llama 3” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.

“Llama Materials” means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.

“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).

By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.

  1. License Rights and Redistribution.

    a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.

    b. Redistribution and Use.

      i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name.

      ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.

      iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”

      iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.

      v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).

  2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.

  3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.

  4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.

  5. Intellectual Property.

    a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.

    b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.

    c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.

  6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.

  7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.

Meta Llama 3 Acceptable Use Policy

Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at https://llama.meta.com/llama3/use-policy

Prohibited Uses

We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to:

  1. Violate the law or others’ rights, including to:

    a. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
      i. Violence or terrorism
      ii. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
      iii. Human trafficking, exploitation, and sexual violence
      iv. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
      v. Sexual solicitation
      vi. Any other criminal activity
    b. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
    c. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
    d. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
    e. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
    f. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
    g. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system

  2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:

    a. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
    b. Guns and illegal weapons (including weapon development)
    c. Illegal drugs and regulated/controlled substances
    d. Operation of critical infrastructure, transportation technologies, or heavy machinery
    e. Self-harm or harm to others, including suicide, cutting, and eating disorders
    f. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual

  3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:

    a. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
    b. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
    c. Generating, promoting, or further distributing spam
    d. Impersonating another individual without consent, authorization, or legal right
    e. Representing that the use of Meta Llama 3 or outputs are human-generated
    f. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement

  4. Fail to appropriately disclose to end users any known dangers of your AI system

Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:

  • Reporting issues with the model: https://github.com/meta-llama/llama3
  • Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback
  • Reporting bugs and security concerns: facebook.com/whitehat/info
  • Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com

  • 拉取模型

    ollama run llama3
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit a6990ed6be41 · 4.7GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 4-bit be39eb53a197 · 40GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:8b
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit a6990ed6be41 · 4.7GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:instruct
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit a6990ed6be41 · 4.7GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:text
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit 6efb64e974e5 · 4.7GB
  • 拉取模型

    ollama run llama3:70b-instruct
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 4-bit be39eb53a197 · 40GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-text
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 4-bit 4872fbd164cc · 40GB
  • 拉取模型

    ollama run llama3:70b-instruct-q4_0
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 4-bit be39eb53a197 · 40GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q4_1
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 4-bit 3e344f6c3fb0 · 44GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q5_0
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 5-bit 073b0e6c20b2 · 49GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q5_1
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 5-bit aeff94a83b1b · 53GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q8_0
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 8-bit d6fa8cffc283 · 75GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q2_K
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 2-bit 5bda334fbdac · 26GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q3_K_S
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 3-bit 1683dcca6b9d · 31GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q3_K_M
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 3-bit 3544a97e8203 · 34GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q3_K_L
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 3-bit c289cd3f1210 · 37GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q4_K_S
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 4-bit 6d6d443b81f1 · 40GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q4_K_M
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 4-bit 5338d7c58d8d · 43GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q5_K_S
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 5-bit e1052b4fa313 · 49GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q5_K_M
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 5-bit 47bf77fd623c · 50GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-q6_K
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 6-bit 2c308ad057fe · 58GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-instruct-fp16
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization F16 49a263bc03b9 · 141GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:70b-text-q4_0
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 4-bit 4872fbd164cc · 40GB
  • 拉取模型

    ollama run llama3:70b-text-q4_1
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 4-bit 0c41cc331d47 · 44GB
  • 拉取模型

    ollama run llama3:70b-text-q5_0
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 5-bit a116bb827414 · 49GB
  • 拉取模型

    ollama run llama3:70b-text-q5_1
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 5-bit 2bae2f327e19 · 53GB
  • 拉取模型

    ollama run llama3:70b-text-q8_0
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 8-bit 0525d3078da3 · 75GB
  • 拉取模型

    ollama run llama3:70b-text-q2_K
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 2-bit 7c92c932a28e · 26GB
  • 拉取模型

    ollama run llama3:70b-text-q3_K_S
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 3-bit 3b9efbb9aafa · 31GB
  • 拉取模型

    ollama run llama3:70b-text-q3_K_M
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 3-bit 9a9955de3d36 · 34GB
  • 拉取模型

    ollama run llama3:70b-text-q3_K_L
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 3-bit b85e153999ca · 37GB
  • 拉取模型

    ollama run llama3:70b-text-q4_K_S
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 4-bit 6d945f0a3d5b · 40GB
  • 拉取模型

    ollama run llama3:70b-text-q4_K_M
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 4-bit a81f956188e7 · 43GB
  • 拉取模型

    ollama run llama3:70b-text-q5_K_S
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 5-bit 0f1757a38bf4 · 49GB
  • 拉取模型

    ollama run llama3:70b-text-q5_K_M
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 5-bit 3e86ea0f99a6 · 50GB
  • 拉取模型

    ollama run llama3:70b-text-q6_K
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization 6-bit eb1695cc6375 · 58GB
  • 拉取模型

    ollama run llama3:70b-text-fp16
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 71B quantization F16 c25fca60e9cc · 141GB
  • 拉取模型

    ollama run llama3:8b-text
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit 6efb64e974e5 · 4.7GB
  • 拉取模型

    ollama run llama3:8b-instruct-q4_0
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit a6990ed6be41 · 4.7GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:8b-instruct-q4_1
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit e2c500538178 · 5.1GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:8b-instruct-q5_0
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 5-bit 83c5f767d081 · 5.6GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:8b-instruct-q5_1
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 5-bit 85dda62d06e5 · 6.1GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:8b-instruct-q8_0
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 8-bit 5a511385d20f · 8.5GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:8b-instruct-q2_K
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 2-bit d68382ca0b5c · 3.2GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:8b-instruct-q3_K_S
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 3-bit bbb0ac73badb · 3.7GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:8b-instruct-q3_K_M
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 3-bit f7dd56f4f803 · 4.0GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:8b-instruct-q3_K_L
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 3-bit 7f6b288718e5 · 4.3GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:8b-instruct-q4_K_S
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit b3377aa9a6e2 · 4.7GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:8b-instruct-q4_K_M
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit 6659ad4fd4cb · 4.9GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • 拉取模型

    ollama run llama3:8b-instruct-q5_K_S
    
  • 模型信息 (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 5-bit c26f0d99c95a · 5.6GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • Pull the model

    ollama run llama3:8b-instruct-q5_K_M

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 5-bit fdc4ae3d5d42 · 5.7GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • Pull the model

    ollama run llama3:8b-instruct-q6_K

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 6-bit 4f09653e5112 · 6.6GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • Pull the model

    ollama run llama3:8b-instruct-fp16

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization F16 ca471fe48cbc · 16GB
    template {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> 254B
    params {"num_keep":24,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]} 110B
  • Pull the model

    ollama run llama3:8b-text-q4_0

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit 6efb64e974e5 · 4.7GB
  • Pull the model

    ollama run llama3:8b-text-q4_1

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit 9bb55287063f · 5.1GB
  • Pull the model

    ollama run llama3:8b-text-q5_0

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 5-bit 3ee7b4839a12 · 5.6GB
  • Pull the model

    ollama run llama3:8b-text-q5_1

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 5-bit 9ebfccd33182 · 6.1GB
  • Pull the model

    ollama run llama3:8b-text-q8_0

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 8-bit ee986e6e9168 · 8.5GB
  • Pull the model

    ollama run llama3:8b-text-q2_K

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 2-bit f0f4670ede47 · 3.2GB
  • Pull the model

    ollama run llama3:8b-text-q3_K_S

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 3-bit 22037ace3d77 · 3.7GB
  • Pull the model

    ollama run llama3:8b-text-q3_K_M

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 3-bit d7e36eb1a3ca · 4.0GB
  • Pull the model

    ollama run llama3:8b-text-q3_K_L

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 3-bit 0f2eb7fc1415 · 4.3GB
  • Pull the model

    ollama run llama3:8b-text-q4_K_S

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit efb3ee9c1e37 · 4.7GB
  • Pull the model

    ollama run llama3:8b-text-q4_K_M

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 4-bit cb29ba0a3026 · 4.9GB
  • Pull the model

    ollama run llama3:8b-text-q5_K_S

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 5-bit 87a4e8aaae2b · 5.6GB
  • Pull the model

    ollama run llama3:8b-text-q6_K

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization 6-bit 4b812abd4eb0 · 6.6GB
  • Pull the model

    ollama run llama3:8b-text-fp16

  • Model information (model)

    Manifest Info Size
    model arch llama parameters 8B quantization F16 fc1ae0909d51 · 16GB
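Every manifest above ships the same Go prompt template and the same `params` blob. As a minimal illustration (not Ollama's actual implementation), the Python sketch below reproduces what that template expands to for a system message and a user prompt; the helper name `render_llama3_prompt` is hypothetical:

```python
def render_llama3_prompt(prompt, system=None):
    """Mimic the Go template from the manifests: wrap each message in
    <|start_header_id|>role<|end_header_id|> ... <|eot_id|> markers."""
    parts = []
    if system:
        parts.append(f"<|start_header_id|>system<|end_header_id|> {system}<|eot_id|>")
    if prompt:
        parts.append(f"<|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|>")
    # The assistant header is left open so the model generates the response;
    # that is why "<|eot_id|>" appears in the stop list of the params blob.
    parts.append("<|start_header_id|>assistant<|end_header_id|> ")
    return "".join(parts)


print(render_llama3_prompt("Hello", system="You are helpful."))
```

Generation halts once the model emits one of the `stop` strings from `params`, which closes the assistant turn.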

Model Details

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.

Model developers Meta

Variants Llama 3 comes in two sizes, 8B and 70B parameters, each in pretrained and instruction tuned variants.

Input Models input text only.

Output Models generate text and code only.

Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

         Training Data                                  Params  Context length  GQA  Token count  Knowledge cutoff
Llama 3  A new mix of publicly available online data.   8B      8k              Yes  15T+         March 2023
                                                        70B     8k              Yes  15T+         December 2023

Llama 3 family of models. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.

Model Release Date April 18, 2024.

Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

License A custom commercial license is available at: https://llama.meta.com/llama3/license

Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.

Intended Use

Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and the Llama 3 Community License. Use in languages other than English.

Note Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.

Hardware and Software

Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on H100-80GB hardware (700W TDP). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.

             Time (GPU hours)  Power Consumption (W)  Carbon Emitted (tCO2eq)
Llama 3 8B   1.3M              700                    390
Llama 3 70B  6.4M              700                    1900
Total        7.7M                                     2290

CO2 emissions during pretraining. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
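As a sanity check, the totals in the table follow directly from the per-model rows, and the stated 700 W peak power also gives a rough upper bound on energy use (the conversion below is illustrative, not a figure Meta reports):

```python
# GPU hours and reported emissions (tCO2eq) from the table above
models = {
    "Llama 3 8B": {"gpu_hours": 1.3e6, "tco2eq": 390},
    "Llama 3 70B": {"gpu_hours": 6.4e6, "tco2eq": 1900},
}
TDP_WATTS = 700  # peak power per H100-80GB GPU, per the table

total_hours = sum(m["gpu_hours"] for m in models.values())
total_tco2 = sum(m["tco2eq"] for m in models.values())
energy_gwh = total_hours * TDP_WATTS / 1e9  # W·h -> GWh

print(f"{total_hours/1e6:.1f}M GPU hours, {total_tco2} tCO2eq, {energy_gwh:.2f} GWh")
# → 7.7M GPU hours, 2290 tCO2eq, 5.39 GWh
```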

Training Data

Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10 million human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

Data Freshness The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model.

Benchmarks

In this section, we report the results of Llama 3 models on standard automatic benchmarks. For all evaluations, we use our internal evaluations library. For details on the methodology, see here.

Base pretrained models

Category               Benchmark                      Llama 3 8B  Llama2 7B  Llama2 13B  Llama 3 70B  Llama2 70B
General                MMLU (5-shot)                  66.6        45.7       53.8        79.5         69.7
                       AGIEval English (3-5 shot)     45.9        28.8       38.7        63.0         54.8
                       CommonSenseQA (7-shot)         72.6        57.6       67.6        83.8         78.7
                       Winogrande (5-shot)            76.1        73.3       75.4        83.1         81.8
                       BIG-Bench Hard (3-shot, CoT)   61.1        38.1       47.0        81.3         65.7
                       ARC-Challenge (25-shot)        78.6        53.7       67.6        93.0         85.3
Knowledge reasoning    TriviaQA-Wiki (5-shot)         78.5        72.1       79.6        89.7         87.5
Reading comprehension  SQuAD (1-shot)                 76.4        72.2       72.1        85.6         82.6
                       QuAC (1-shot, F1)              44.4        39.6       44.9        51.1         49.4
                       BoolQ (0-shot)                 75.7        65.5       66.9        79.0         73.1
                       DROP (3-shot, F1)              58.4        37.9       49.8        79.7         70.2

Instruction tuned models

Benchmark             Llama 3 8B  Llama 2 7B  Llama 2 13B  Llama 3 70B  Llama 2 70B
MMLU (5-shot)         68.4        34.1        47.8         82.0         52.9
GPQA (0-shot)         34.2        21.7        22.3         39.5         21.0
HumanEval (0-shot)    62.2        7.9         14.0         81.7         25.6
GSM-8K (8-shot, CoT)  79.6        25.7        77.4         93.0         57.5
MATH (4-shot, CoT)    30.0        3.8         6.7          50.4         11.6
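The generation-over-generation gap in the instruct-model table can be made concrete with a few lines of Python; the scores below are copied from the table above, comparing Llama 3 8B against Llama 2 7B:

```python
# Instruct-model scores copied from the benchmark table above
llama3_8b = {"MMLU": 68.4, "GPQA": 34.2, "HumanEval": 62.2, "GSM-8K": 79.6, "MATH": 30.0}
llama2_7b = {"MMLU": 34.1, "GPQA": 21.7, "HumanEval": 7.9, "GSM-8K": 25.7, "MATH": 3.8}

# Absolute improvement per benchmark, in points
deltas = {k: round(llama3_8b[k] - llama2_7b[k], 1) for k in llama3_8b}
print(deltas)
# → {'MMLU': 34.3, 'GPQA': 12.5, 'HumanEval': 54.3, 'GSM-8K': 53.9, 'MATH': 26.2}
```

The largest gains are on coding (HumanEval) and math word problems (GSM-8K), consistent with the model card's emphasis on improved capability at similar parameter counts.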

Responsibility & Safety

We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to responsible AI development and took a series of steps to limit misuse and harm and to support the open source community.

Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer's preference on safety levels for all use cases out of the box, as those naturally differ across applications.

Rather, responsible LLM application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pretraining and fine-tuning to the deployment of systems with safeguards, tailored to the safety needs of the specific use case and audience.

As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model- and system-level safety. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce the residual risks of LLM systems while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs, and we provide a reference implementation to get you started.

Llama 3-Instruct

As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.

Safety

For our instruction tuned models, we conducted extensive red teaming exercises, performed adversarial evaluations, and implemented safety mitigation techniques to lower residual risks. As with any large language model, residual risks will likely remain, and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous, and interpretable.

Refusals

In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only degrades the user experience but can even be harmful in certain contexts. We have heard feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely than Llama 2 to falsely refuse to answer prompts.

We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.

Responsible release

In addition to the responsible-use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before making our release decision.

Misuse

If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at https://llama.meta.com/llama3/use-policy/.

Critical risks

CBRNE (Chemical, Biological, Radiological, Nuclear, and high-yield Explosives)

We have conducted a two-fold assessment of the safety of the model in this area:

  • Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
  • Involving external CBRNE experts to conduct an uplift test assessing the model's ability to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).

Cyber Security

We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and its propensity to comply with requests to help carry out cyber attacks, where attacks are those defined by the industry-standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range as, or safer than, models of equivalent coding capability.

Child Safety

Child Safety risk assessments were conducted using a team of experts to assess the model's capability to produce outputs that could result in Child Safety risks, and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks throughout Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model's risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content, while taking account of market-specific nuances and experiences.

Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, the Partnership on AI, and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use, and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our GitHub repository.

Finally, we put in place a set of resources, including an output reporting mechanism and a bug bounty program, to continuously improve the Llama technology with the help of the community.

Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity, and helpfulness. It is meant to serve everyone and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences, and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows, specifically Llama Guard, which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.

Please see the Responsible Use Guide, available at http://llama.meta.com/responsible-use-guide

Citation instructions

@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}

Contributors

Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Amit Sangani; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Ash JJhaveri; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hamid Shojanazeri; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason 
Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Puxin Xu; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; 
Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos