
I would like to use Intel Extension for PyTorch (IPEX) in my code to increase overall performance. I referred to this GitHub repository (https://github.com/intel/intel-extension-for-pytorch) for installation.

Currently, I am trying out the Hugging Face summarization PyTorch example (https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py). Below is the Trainer API used for training.

    # Initialize our Trainer
    trainer = Seq2SeqTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset if training_args.do_train else None,
        eval_dataset=eval_dataset if training_args.do_eval else None,
        tokenizer=tokenizer,
        data_collator=data_collator,
        compute_metrics=compute_metrics if training_args.predict_with_generate else None,
    )

I am not sure how to enable IPEX in this code. Can anyone help me with this?

Thanks in Advance!

4 Answers


The key changes that are required to enable IPEX are:

    # Import the libraries:
    import torch
    import intel_extension_for_pytorch as ipex

    # Apply the optimizations to the model for its datatype:
    model = ipex.optimize(model)

    # torch.channels_last should be applied to both the model object and the data
    # to raise CPU resource usage efficiency (it requires rank-4, NCHW tensors):
    model = model.to(memory_format=torch.channels_last)
    data = data.to(memory_format=torch.channels_last)
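
For context, here is a minimal, self-contained sketch of those steps; the toy CNN and random input batch are illustrative, not from the original answer. Note that channels_last only applies to rank-4 (NCHW) tensors such as image batches, which is why 2-D `input_ids` tensors raise the error reported in the comment below.

    import torch
    import intel_extension_for_pytorch as ipex

    # Illustrative rank-4 workload: a tiny CNN and a random NCHW image batch.
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
        torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1),
        torch.nn.Flatten(),
        torch.nn.Linear(16, 10),
    ).eval()
    data = torch.rand(1, 3, 224, 224)

    # channels_last for both model and data (requires rank-4 tensors).
    model = model.to(memory_format=torch.channels_last)
    data = data.to(memory_format=torch.channels_last)

    # Apply IPEX optimizations for the model's datatype (FP32 here).
    model = ipex.optimize(model)

    with torch.no_grad():
        output = model(data)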

Also, please check out https://intel.github.io/intel-extension-for-pytorch/latest/tutorials/examples.html for IPEX examples, and the IPEX official page at https://www.intel.com/content/www/us/en/developer/tools/oneapi/extension-for-pytorch.html.

– Ramya R
  • When I try `input_ids = input['input_ids'].to(memory_format=torch.channels_last)`, I'm getting the following error: `RuntimeError: required rank 4 tensor to use channels_last format` – Serge Rogatch Apr 24 '23 at 19:35

To enable Intel Extension for PyTorch, you just have to add this to your code:

import intel_extension_for_pytorch as ipex

Importing the above extends PyTorch with optimizations for an extra performance boost on Intel hardware.

After that, you have to add this to your code:

model = model.to(ipex.DEVICE)
– Dharman

First, you will need to subclass the Trainer object and create a custom optimizer, as described in the Hugging Face docs (a sketch of such a subclass follows the snippet below).

The API for using intel_extension_for_pytorch has changed a bit; to use the library, you just have to do:

import intel_extension_for_pytorch as ipex

model, optimizer = ipex.optimize(model, optimizer=optimizer)
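
A minimal sketch of that subclass, assuming that overriding Trainer.create_optimizer is a suitable hook and that applying ipex.optimize there is acceptable; the class name IpexSeq2SeqTrainer is illustrative, not from the original answer:

    import intel_extension_for_pytorch as ipex
    from transformers import Seq2SeqTrainer

    class IpexSeq2SeqTrainer(Seq2SeqTrainer):
        """Illustrative Trainer subclass that applies ipex.optimize once the optimizer exists."""

        def create_optimizer(self):
            # Let the stock Trainer build its default optimizer first.
            super().create_optimizer()
            # Optimize model and optimizer together for training on Intel CPUs.
            self.model, self.optimizer = ipex.optimize(self.model, optimizer=self.optimizer)
            return self.optimizer

    # Drop-in replacement for Seq2SeqTrainer in the question's snippet:
    # trainer = IpexSeq2SeqTrainer(model=model, args=training_args, ...)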
– unrahul

Currently, Transformers 4.21 has support for IPEX, including IPEX graph optimization with JIT mode:

python run_qa.py \
    --model_name_or_path csarron/bert-base-uncased-squad-v1 \
    --dataset_name squad \
    --do_eval \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir /tmp/ \
    --no_cuda \
    --use_ipex \
    --jit_mode_eval
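
Those flags are ordinary TrainingArguments fields, so (assuming Transformers >= 4.21) the question's Seq2SeqTrainer snippet can enable IPEX without a CLI script at all. A sketch, where values other than use_ipex, jit_mode_eval, and no_cuda are illustrative:

    from transformers import Seq2SeqTrainingArguments

    training_args = Seq2SeqTrainingArguments(
        output_dir="/tmp/summarization_ipex",
        do_train=True,
        do_eval=True,
        no_cuda=True,            # run on CPU
        use_ipex=True,           # let the Trainer use intel_extension_for_pytorch internally
        jit_mode_eval=True,      # TorchScript JIT trace for evaluation
        predict_with_generate=True,
    )
    # Pass training_args to the Seq2SeqTrainer from the question as before.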