Phi-4-Mini Technical Report: Compact yet Powerful Multimodal
Language Models via Mixture-of-LoRAs
Core contribution
Phi-4-Multimodal employs a novel “mixture of LoRAs”
technique, enabling multimodal capabilities by integrating modality-specific LoRAs while keeping the
base language model entirely frozen.
The defining feature of this work is that it uses LoRA to bolt additional modalities (vision and speech, in the paper) onto an already pre-trained language model.
At first glance this is a lighter-weight way to fuse modalities, and given how LoRA works, it is also more likely to do less damage to the model's text capabilities.
What the forward pass looks like
The vision encoder is still a ViT architecture, and the audio encoder is still a Conformer.
Both the vision encoder and the audio encoder are followed by a linear layer (the projector in the figure) that maps their hidden states to the same hidden_size as the text model (3072 for both).
On the LLM side, vision and audio each get their own separately initialized and trained LoRA, and the tokens projected in from each modality pass through the corresponding LoRA.
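To make that data flow concrete, here is a minimal PyTorch sketch of the forward path. It is my own illustration, not the authors' code: the 3072 hidden size and the frozen-base-plus-per-modality-adapter idea come from the paper (rank 256 / alpha 512 from the config quoted later), while the toy encoder dimensions and names such as `LoRALinear` and `vision_projector` are assumptions. In the real model the active adapter is chosen per request from the processor's `input_mode`, not per token.

```python
import torch
import torch.nn as nn

HIDDEN = 3072  # hidden size shared by the projectors and the frozen LLM

class LoRALinear(nn.Module):
    """A frozen base projection plus one low-rank adapter per modality ("mixture of LoRAs")."""
    def __init__(self, base: nn.Linear, r: int = 256, alpha: int = 512,
                 modalities=("vision", "speech")):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # the language model stays frozen
        self.A = nn.ModuleDict({m: nn.Linear(base.in_features, r, bias=False) for m in modalities})
        self.B = nn.ModuleDict({m: nn.Linear(r, base.out_features, bias=False) for m in modalities})
        for m in modalities:
            nn.init.zeros_(self.B[m].weight)             # adapters start as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor, mode: str) -> torch.Tensor:
        y = self.base(x)
        if mode in self.A:                               # text-only requests skip every adapter
            y = y + self.scale * self.B[mode](self.A[mode](x))
        return y

# Toy stand-ins for the ViT / Conformer encoders and their linear projectors (dims assumed).
vision_encoder = nn.Sequential(nn.Flatten(1), nn.Linear(3 * 32 * 32, 1152))
vision_projector = nn.Linear(1152, HIDDEN)               # maps vision hidden states to 3072
audio_encoder = nn.Linear(80, 1024)                      # e.g. log-mel frames -> Conformer hidden
audio_projector = nn.Linear(1024, HIDDEN)

layer = LoRALinear(nn.Linear(HIDDEN, HIDDEN))            # one projection inside one LLM layer

image_tokens = vision_projector(vision_encoder(torch.randn(1, 3, 32, 32))).unsqueeze(1)
text_tokens = torch.randn(1, 8, HIDDEN)                  # placeholder text embeddings
sequence = torch.cat([image_tokens, text_tokens], dim=1)
print(layer(sequence, mode="vision").shape)              # torch.Size([1, 9, 3072])
```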
How it is trained
Overall cadence
Language model -> vision-to-text joint tuning -> audio-to-text joint tuning -> vision-to-speech/text joint tuning
The authors do not organize the paper along a pretrain / post-train split, mainly so that the training of each modality can be explained clearly. Here I switch to that perspective.
Stage | Language model | Vision modality | Speech/audio modality |
---|---|---|---|
Pretrain | Base Transformer + high-quality text pre-training | SigLIP encoder + projector alignment | Conformer encoder + ASR alignment |
Post-train | Task fine-tuning (code/reasoning) + DPO | LoRA_V adapter + multi-frame (video) training | LoRA_A adapter + multi-task instruction tuning |
Data scale | 5T tokens (pretrain) | 0.5T image-text tokens (pretrain) + 0.3T instruction tokens (post) | 2M hours (pretrain) + 100M (post) |
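Read as a recipe, each row of the table mostly comes down to which parameter groups receive gradients in which stage. The mapping below is only my reading of that schedule (inferred from the table and the notes that follow, not from any released training script); the helper itself is just standard `requires_grad` masking.

```python
from typing import Iterable, Tuple

import torch.nn as nn

# My guess at the trainable parameter groups per stage; the exact assignment in the
# paper's recipe may differ (e.g. whether the encoders are unfrozen in later stages).
STAGES = {
    "text_pretrain":       {"llm."},
    "vision_alignment":    {"vision_projector."},
    "vision_post_train":   {"vision_encoder.", "vision_projector.", "lora_v."},
    "speech_asr_pretrain": {"audio_encoder.", "audio_projector."},
    "speech_post_train":   {"audio_projector.", "lora_a."},
    "vision_speech_joint": {"vision_projector.", "lora_v."},   # only vision-side params un-frozen
}

def set_stage(named_params: Iterable[Tuple[str, nn.Parameter]], stage: str) -> None:
    """Enable gradients only for parameter groups that belong to the given stage."""
    prefixes = STAGES[stage]
    for name, param in named_params:
        param.requires_grad = any(name.startswith(p) for p in prefixes)

# Usage: set_stage(model.named_parameters(), "vision_speech_joint")
```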
Training the vision modality
I drew a diagram for this part of the training (it is not from the paper, so any mistakes are mine):
Hollow shapes with dashed outlines: not yet present at this stage
Shapes with diagonal hatching: frozen at this stage
Video understanding is trained in the post-train stage, in the form of multiple frames (a frame-sampling sketch follows).
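Since (as noted further down) the paper does not describe any special video pipeline, the simplest reading of "multiple frames" is that a clip is reduced to a handful of frames and passed in as ordinary images. The uniform sampling below is purely illustrative; the actual frame-selection strategy is not documented.

```python
from typing import List, Sequence

def sample_frames(frames: Sequence, num_frames: int = 8) -> List:
    """Uniformly pick `num_frames` frames from a decoded clip so a video can be fed
    to the model as a plain list of images (e.g. PIL.Image objects)."""
    if len(frames) <= num_frames:
        return list(frames)
    step = len(frames) / num_frames
    return [frames[int(i * step)] for i in range(num_frames)]
```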
Training the audio modality
This part is far less involved than the vision modality; there are just two stages:
- ASR pretrain
- Multi-task post-training
The post-training covers multiple tasks:
The post-training data covers a variety of tasks, including automatic speech
recognition (ASR), automatic speech translation (AST), speech question answering (SQA), spoken query
question answering (SQQA), speech summarization (SSUM), and audio understanding (AU)
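In practice such a multi-task mix pairs each audio clip with a task-specific text instruction. The templates below are invented for illustration (the paper does not publish its prompts); `<|audio_1|>` is the audio placeholder convention used by the released processor, as far as I can tell.

```python
# Illustrative instruction templates per task; the real training prompts are not released.
TASK_PROMPTS = {
    "ASR":  "Transcribe the audio clip into text.",
    "AST":  "Translate the audio clip into {target_lang}.",
    "SQA":  "Answer the question asked in the audio clip.",
    "SQQA": "",                       # spoken-query QA: the audio itself is the query
    "SSUM": "Summarize the audio clip.",
    "AU":   "Describe the sounds in the audio clip.",
}

def build_sample(task: str, audio_placeholder: str = "<|audio_1|>", **kwargs) -> str:
    """Assemble one prompt: the audio placeholder followed by the task instruction (if any)."""
    instruction = TASK_PROMPTS[task].format(**kwargs)
    return (audio_placeholder + " " + instruction).strip()

# e.g. build_sample("AST", target_lang="German")
```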
Vision + audio joint tuning
There are two key points here:
- Only the vision-related parameters are left un-frozen
- No video data is used for this fine-tuning, even though video is exactly the kind of data in which the audio and image streams are naturally coupled; this work does not exploit it here. Instead, TTS-synthesized speech data is used (a sketch of that conversion follows these bullets)
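A plausible way to build that data is to take an existing vision-SFT sample and synthesize its text question into speech, so the model learns to answer a spoken question about an image. The sketch below assumes a generic `tts()` callable and a particular sample schema; only the idea of swapping the text query for TTS audio comes from the paper.

```python
from typing import Any, Callable, Dict

def to_vision_speech_sample(sample: Dict[str, Any], tts: Callable[[str], Any]) -> Dict[str, Any]:
    """Turn an (image, text question, answer) SFT sample into an (image, spoken question,
    answer) sample by synthesizing the question with a TTS system (`tts` is assumed)."""
    return {
        "image": sample["image"],
        "audio": tts(sample["question"]),      # spoken version of the original question
        "prompt": "<|image_1|> <|audio_1|>",   # placeholders for the two modalities
        "target": sample["answer"],            # supervision stays the original text answer
    }
```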
Performance
- On image benchmarks it scores roughly on par with Qwen2.5-VL 3B, and somewhat below Qwen-VL 7B and Gemini
- Its text ability is somewhat stronger than that of the VL models
Other points worth noting (not necessarily clever choices)
- The LoRAs are large (admittedly, this is more or less expected)
This is the configuration of the image LoRA: it uses a lora_alpha of 512 with r = 256. Considering the model's own hidden size is only 3072, 512 is not small; the speech LoRA's alpha is 640. A rough parameter-count estimate follows the config.
{
"auto_mapping": null,
"base_model_name_or_path": "TBA",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"lora_alpha": 512,
"lora_dropout": 0.0,
"modules_to_save": [],
"peft_type": "LORA",
"r": 256,
"revision": null,
"target_modules": [
"qkv_proj",
"o_proj",
"gate_up_proj",
"down_proj"
],
"task_type": "CAUSAL_LM"
}
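To put "large" in numbers: with r = 256, a single adapted projection of shape (d_in, d_out) adds r * (d_in + d_out) parameters. The sketch below plugs in the four target modules from the config; the per-module shapes and the 32-layer depth are my assumptions about Phi-4-Mini's geometry (only the 3072 hidden size is from the paper), so treat the total as an order-of-magnitude estimate.

```python
# Assumed Phi-4-Mini projection shapes (hidden size 3072 is from the paper; the attention
# and MLP widths below are my best guess and may be off). LoRA adds r * (d_in + d_out)
# parameters per adapted matrix.
R = 256
NUM_LAYERS = 32          # assumed decoder depth
SHAPES = {               # (d_in, d_out) per target module
    "qkv_proj":     (3072, 5120),    # fused Q/K/V with grouped-query attention (assumed)
    "o_proj":       (3072, 3072),
    "gate_up_proj": (3072, 16384),   # fused gate+up MLP projection (assumed)
    "down_proj":    (8192, 3072),
}

per_layer = sum(R * (d_in + d_out) for d_in, d_out in SHAPES.values())
total = per_layer * NUM_LAYERS
print(f"{per_layer:,} LoRA params per layer, {total:,} total")  # ~369M under these assumptions
```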
- There seems to be no video-specific handling or optimization
The paper does not describe any Qwen-VL-style sampling/patching/etc. optimizations for video streams; it basically treats a video as a set of images. I also checked this in the authors' preprocessing script.
If you are interested, take a look at the beginning of their processor:
# (Imports are abridged in this excerpt; these are the typical ones it would need.
#  AudioInputs and InputMode are defined elsewhere in the same processing file.)
from typing import List, Optional, Union

import torch
from transformers.feature_extraction_utils import BatchFeature
from transformers.image_utils import ImageInput
from transformers.processing_utils import ProcessorMixin
from transformers.tokenization_utils_base import PaddingStrategy, TextInput, TruncationStrategy
from transformers.utils import TensorType

class Phi4MMProcessor(ProcessorMixin):
r"""
Constructs a Phi4MM processor which wraps an image processor, an audio processor, and a GPT tokenizer into a single processor.
[`Phi4MMProcessor`] offers all the functionalities of [`Phi4MMImageProcessor`] and [`GPT2Tokenizer`]. See the
[`~Phi4MMProcessor.__call__`] and [`~Phi4MMProcessor.decode`] for more information.
Args:
image_processor ([`Phi4MMImageProcessor`], *optional*):
The image processor is a required input.
tokenizer ([`GPT2Tokenizer`], *optional*):
The tokenizer is a required input.
"""
attributes = ["image_processor", "audio_processor", "tokenizer"]
tokenizer_class = "GPT2TokenizerFast"
image_processor_class = "AutoImageProcessor" # Phi4MMImageProcessor will be registered later
audio_processor_class = "AutoFeatureExtractor" # Phi4MMAudioFeatureExtractor will be registered later
def __init__(self, image_processor, audio_processor, tokenizer):
self.image_processor = image_processor
self.audio_processor = audio_processor
self.tokenizer = tokenizer
def __call__(
self,
text: Union[TextInput, List[TextInput]],
images: Optional[ImageInput] = None,
audios: Optional[AudioInputs] = None,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Optional[Union[bool, str, TruncationStrategy]] = None,
max_length=None,
return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH,
) -> BatchFeature:
"""
Main method to prepare for the model one or several sequence(s) and image(s). This method forwards the `text`
and `kwargs` arguments to GPT2Tokenizer's [`~GPT2Tokenizer.__call__`] if `text` is not `None` to encode
the text. To prepare the image(s), this method forwards the `images` and `kwargs` arguments to
Phi4MMImageProcessor's [`~Phi4MMImageProcessor.__call__`] if `images` is not `None`. Please refer to the docstring
of the above two methods for more information.
Args:
text (`str`, `List[str]`, `List[List[str]]`):
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
`is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
tensor. Both channels-first and channels-last formats are supported.
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding
index) among:
- `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
- `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
acceptable input length for the model if that argument is not provided.
- `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different
lengths).
max_length (`int`, *optional*):
Maximum length of the returned list and optionally padding length (see above).
truncation (`bool`, *optional*):
Activates truncation to cut input sequences longer than `max_length` to `max_length`.
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors of a particular framework. Acceptable values are:
- `'tf'`: Return TensorFlow `tf.constant` objects.
- `'pt'`: Return PyTorch `torch.Tensor` objects.
- `'np'`: Return NumPy `np.ndarray` objects.
- `'jax'`: Return JAX `jnp.ndarray` objects.
Returns:
[`BatchFeature`]: A [`BatchFeature`] with the following fields:
- **input_ids** -- List of token ids to be fed to a model.
- **input_image_embeds** -- Pixel values to be fed to a model.
- **image_sizes** -- List of tuples specifying the size of each image in `input_image_embeds`.
- **image_attention_mask** -- List of attention masks for each image in `input_image_embeds`.
- **input_audio_embeds** -- Audio embeddings to be fed to a model.
- **audio_embed_sizes** -- List of integers specifying the size of each audio in `input_audio_embeds`.
- **attention_mask** -- List of indices specifying which tokens should be attended to by the model.
"""
image_inputs = self.image_processor(images, return_tensors=return_tensors) if images is not None else {}
audio_inputs = self.audio_processor(audios, return_tensors=return_tensors) if audios is not None else {}
inputs = self._convert_images_audios_text_to_inputs(
image_inputs,
audio_inputs,
text,
padding=padding,
truncation=truncation,
max_length=max_length,
return_tensors=return_tensors,
)
# identify the input mode
if len(image_inputs) > 0 and len(audio_inputs) > 0:
input_mode = InputMode.VISION_SPEECH
elif len(image_inputs) > 0:
input_mode = InputMode.VISION
elif len(audio_inputs) > 0:
input_mode = InputMode.SPEECH
else:
input_mode = InputMode.LANGUAGE
inputs["input_mode"] = torch.tensor([input_mode.value], dtype=torch.long)
return inputs
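For completeness, this is roughly how that processor is driven (based on my recollection of the Hugging Face model card for `microsoft/Phi-4-multimodal-instruct`; the chat markers and `<|image_i|>` placeholders are the model's convention, but verify them against the card). Note that a "video" is passed simply as several image placeholders plus a list of frames, which is exactly the treat-video-as-images behavior discussed above.

```python
from PIL import Image
from transformers import AutoProcessor

model_id = "microsoft/Phi-4-multimodal-instruct"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Two frames sampled from a clip, passed as ordinary images.
frames = [Image.open("frame_0.jpg"), Image.open("frame_1.jpg")]
prompt = "<|user|><|image_1|><|image_2|>What changes between these two frames?<|end|><|assistant|>"

inputs = processor(text=prompt, images=frames, return_tensors="pt")
# The processor records one input mode for the whole request (vision-only here, no audio).
print(inputs["input_mode"], inputs["input_ids"].shape)
```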
- Data for the capabilities they want is pulled forward into the pre-train stage
The main body of the paper does not say much about how Phi-4-Mini and its reasoning-enhanced variant are strengthened.
First, building on Phi-4-Mini, the model is pre-trained on approximately 60
billion reasoning CoT tokens generated by frontier reasoning LLMs, after which rejection sampling is
employed to filter out incorrect outputs.
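Rejection sampling here just means: have a strong teacher generate CoT candidates, keep only those whose final answer matches a reference (or passes a verifier), and feed the survivors into pre-training. A minimal sketch of that filter, with hypothetical `generate()` and `extract_answer()` callables:

```python
from typing import Callable, Dict, Iterable, List

def rejection_sample(
    problems: Iterable[Dict[str, str]],            # each item: {"question": ..., "answer": ...}
    generate: Callable[[str], List[str]],          # teacher LLM: question -> CoT candidates (assumed)
    extract_answer: Callable[[str], str],          # pulls the final answer out of a CoT (assumed)
) -> List[Dict[str, str]]:
    """Keep only chain-of-thought samples whose final answer matches the reference."""
    kept = []
    for item in problems:
        for cot in generate(item["question"]):
            if extract_answer(cot).strip() == item["answer"].strip():
                kept.append({"question": item["question"], "cot": cot})
    return kept
```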
Overall assessment
- This style of multimodal training does seem better at preserving text capability.
- During training, the different modalities entangle with and drag on each other relatively little.
Overall, the scheme looks lighter-weight. Combined with points 1 and 2 above, at a larger parameter scale it could quite plausibly give a dialogue system both solid speech understanding and strong reasoning, and the text model can be optimized without having to worry about the speech side.