Update model card (#25)

Update model card (8124ca2a858fd1c8ceea2fc56f2ff3f9e76a30c3)

Co-authored-by: Pedro Cuenca <pcuenq@users.noreply.huggingface.co>

parent e9f8effbab · commit 9213176726

README.md (121 lines changed)

@@ -221,7 +221,7 @@ extra_gated_button_content: Submit

## Model Information

The Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open-source and closed chat models on common industry benchmarks.

**Model Developer:** Meta

@@ -231,6 +231,8 @@ The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a

| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |

**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.

@@ -242,17 +244,17 @@ The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a

**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).

**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction-tuned, text-only models are intended for assistant-like chat and agentic applications such as knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use cases with limited compute resources.

**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and the Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.

## How to use

This repository contains two versions of Llama-3.2-1B-Instruct, for use with `transformers` and with the original `llama` codebase.

### Use with transformers
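
The usage snippet itself sits outside this diff's context lines; a minimal sketch of conversational inference with the `transformers` `pipeline` API (the chat messages and generation settings below are illustrative, not part of the card) looks like this:

```python
import torch
from transformers import pipeline

model_id = "meta-llama/Llama-3.2-1B-Instruct"

# Chat-style text-generation pipeline: bfloat16 weights, automatic device placement.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize why on-device LLMs benefit from quantization."},
]

# Recent transformers releases apply the model's chat template automatically
# when a list of messages is passed to a text-generation pipeline.
outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1])  # the newly generated assistant turn
```

The original-format checkpoints are fetched with the `huggingface-cli download` command shown in the next hunk.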

@@ -296,19 +298,23 @@ huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*"

## Hardware and Software

**Training Factors:** We used custom training libraries, Meta's custom-built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.

**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.

**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.

| | Training Time (GPU hours) | Logit Generation Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | :---: | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLoRA | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLoRA | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |

\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each, due to the minimal training GPU hours required.

The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.

@@ -318,6 +324,24 @@ The methodology used to determine training energy use and greenhouse gas emissio

**Data Freshness:** The pretraining data has a cutoff of December 2023.

## Quantization

### Quantization Scheme

We designed the current quantization scheme with [PyTorch's ExecuTorch](https://github.com/pytorch/executorch) inference framework and the Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts (see the sketch after this list):

- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
- The classification layer is quantized to 8-bit per-channel for weights and 8-bit per-token dynamic quantization for activations.
- Similar to the classification layer, 8-bit per-channel quantization is used for the embedding layer.
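
As a rough, self-contained illustration of the first bullet (a toy PyTorch sketch, not the ExecuTorch implementation; the helper names and tensor shapes are invented for the example), groupwise 4-bit weight quantization with per-token dynamic 8-bit activation quantization can be written as:

```python
import torch

def quantize_weights_4bit_groupwise(w: torch.Tensor, group_size: int = 32):
    """Symmetric 4-bit groupwise weight quantization: one scale per group of 32 values."""
    out_features, in_features = w.shape
    wg = w.reshape(out_features, in_features // group_size, group_size)
    scale = wg.abs().amax(dim=-1, keepdim=True) / 7.0           # int4 range is [-8, 7]
    q = torch.clamp(torch.round(wg / scale), -8, 7).to(torch.int8)
    return q, scale

def quantize_activations_8bit_per_token(x: torch.Tensor):
    """Dynamic 8-bit per-token activation quantization: one scale per token (row)."""
    scale = x.abs().amax(dim=-1, keepdim=True) / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
    return q, scale

# Toy check: quantize, dequantize, and compare against the full-precision matmul.
w = torch.randn(64, 128)          # a "linear layer" weight
x = torch.randn(4, 128)           # 4 tokens of activations
qw, sw = quantize_weights_4bit_groupwise(w)
qx, sx = quantize_activations_8bit_per_token(x)
w_hat = (qw.float() * sw).reshape_as(w)
x_hat = qx.float() * sx
print((x @ w.T - x_hat @ w_hat.T).abs().mean())   # small reconstruction error
```

Each group of 32 weight values shares one scale, while every token row of activations gets its own scale computed at run time, which is what "dynamic" refers to.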

### Quantization-Aware Training and LoRA

The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full-precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al. (2023) (i.e., quantization followed by LoRA adapters), we refer to this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).
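
A minimal sketch of the adapter stage described above, i.e. a frozen fake-quantized backbone linear layer with trainable LoRA adapters (the module name, rank, and training step are illustrative assumptions, not Meta's training code):

```python
import torch
import torch.nn as nn

class QATLoRALinear(nn.Module):
    """Frozen, fake-quantized backbone linear layer plus trainable LoRA adapters."""
    def __init__(self, linear: nn.Linear, group_size: int = 32, rank: int = 16):
        super().__init__()
        self.weight = nn.Parameter(linear.weight.detach(), requires_grad=False)  # frozen backbone
        self.group_size = group_size
        # LoRA adapters: A initialized small, B at zero, both kept in higher precision and trained.
        self.lora_a = nn.Parameter(torch.randn(rank, linear.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(linear.out_features, rank))

    def fake_quant(self, w: torch.Tensor) -> torch.Tensor:
        out_f, in_f = w.shape
        wg = w.reshape(out_f, in_f // self.group_size, self.group_size)
        scale = wg.abs().amax(dim=-1, keepdim=True) / 7.0        # 4-bit symmetric range [-8, 7]
        q = torch.clamp(torch.round(wg / scale), -8, 7) * scale  # quantize-dequantize
        return q.reshape_as(w)                                   # forward pass sees quantized weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.fake_quant(self.weight).T + (x @ self.lora_a.T) @ self.lora_b.T

layer = QATLoRALinear(nn.Linear(128, 64))
opt = torch.optim.AdamW([p for p in layer.parameters() if p.requires_grad], lr=1e-4)
loss = layer(torch.randn(4, 128)).pow(2).mean()
loss.backward()                                                  # gradients reach only the adapters
opt.step()
```

In the recipe above, the backbone itself is first trained with QAT (SFT under fake quantization) before being frozen; the sketch shows only the subsequent LoRA stage.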

### SpinQuant

[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence length.
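
The intuition behind the rotations can be shown with a toy example (illustrative only: SpinQuant learns its rotation matrices and folds them into the network so the computation is unchanged, whereas here a random orthogonal matrix and the same toy 4-bit quantizer stand in for the real pipeline):

```python
import torch

def quantize_4bit(w: torch.Tensor, group_size: int = 32) -> torch.Tensor:
    """Quantize-dequantize with a 4-bit groupwise scheme, returning the reconstructed weights."""
    wg = w.reshape(w.shape[0], -1, group_size)
    scale = wg.abs().amax(dim=-1, keepdim=True) / 7.0
    return (torch.clamp(torch.round(wg / scale), -8, 7) * scale).reshape_as(w)

torch.manual_seed(0)
w = torch.randn(64, 128)
w[:, :4] *= 30.0                           # a few outlier channels inflate the per-group scales

# A random orthogonal rotation (SpinQuant optimizes these instead of drawing them at random).
rot, _ = torch.linalg.qr(torch.randn(128, 128))

err_plain = (quantize_4bit(w) - w).norm()
err_rot = (quantize_4bit(w @ rot) @ rot.T - w).norm()   # rotate, quantize, rotate back
print(err_plain, err_rot)                  # the rotation spreads outliers and reduces the error
```

Because the rotation is orthogonal, its inverse can be folded into the neighboring operation, so the model's function is preserved and the extra cost at inference time is negligible.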

## Benchmarks - English Text

In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.

@@ -336,35 +360,64 @@ In this section, we report the results for Llama 3.2 models on standard automati

### Instruction Tuned Models

| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B SpinQuant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B SpinQuant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |

\*\* For comparison purposes only. Model not released.

### Multilingual Benchmarks

| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B SpinQuant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B SpinQuant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |

\*\* For comparison purposes only. Model not released.

## Inference Time

In the table below, we compare the performance metrics of different quantization methods (SpinQuant and QAT + LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the Arm CPU backend, on an Android OnePlus 12 device.

| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |

(\*) The performance measurement is done using an adb binary-based approach.

(\*\*) It is measured on an Android OnePlus 12 device.

(\*\*\*) Time-to-first-token (TTFT) is measured with a prompt length of 64 tokens.

*Footnotes:*

- *Decode (tokens/second): how quickly the model keeps generating tokens. Higher is better.*
- *Time-to-first-token (TTFT): how fast the first token is generated for a given prompt. Lower is better.*
- *Prefill: the inverse of TTFT (i.e., 1/TTFT) in tokens/second. Higher is better.*
- *Model size: the size of the model, measured by the PTE file, a binary file format for ExecuTorch.*
- *RSS size: memory usage in resident set size (RSS).*
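
For readers who want to sanity-check the derived figures, the relative numbers in parentheses follow directly from the absolute columns; a small sketch reusing the 1B BF16 and 1B SpinQuant rows above (the dictionary names are illustrative):

```python
# Recompute the derived columns of the inference-time table from the absolute measurements
# (1B BF16 baseline vs. 1B SpinQuant). Values are copied from the table above.
baseline  = {"decode": 19.2, "prefill": 60.3, "size_mb": 2358, "rss_mb": 3185}
spinquant = {"decode": 50.2, "prefill": 260.5, "size_mb": 1083, "rss_mb": 1921}

print(f"decode speedup : {spinquant['decode'] / baseline['decode']:.1f}x")                # 2.6x
print(f"prefill speedup: {spinquant['prefill'] / baseline['prefill']:.1f}x")              # 4.3x
print(f"model size     : {(spinquant['size_mb'] / baseline['size_mb'] - 1) * 100:.1f}%")  # -54.1%
print(f"memory (RSS)   : {(spinquant['rss_mb'] / baseline['rss_mb'] - 1) * 100:.1f}%")    # -39.7%
```

The TTFT percentages are omitted from the sketch because the table only reports TTFT to one decimal place, so they cannot be reproduced exactly from the rounded values.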

## Responsibility & Safety