update modeling_baichuan.py for torchscript mode with past_kv

#2

This change enables model inference in TorchScript mode with `past_key_values`, reading the `use_cache` and `return_dict` defaults from `model.config`.
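The pattern described above can be sketched as follows. This is a hypothetical, simplified illustration of the common Hugging Face convention (not the actual `modeling_baichuan.py` code): a forward pass falls back to config values when flags are not passed explicitly, and forces tuple outputs when `config.torchscript` is set, since TorchScript tracing cannot handle dict-style model outputs. `DummyConfig` and the placeholder return values are invented for the sketch.

```python
class DummyConfig:
    """Stand-in for a model config carrying the relevant flags (hypothetical)."""
    def __init__(self, use_cache=True, use_return_dict=True, torchscript=True):
        self.use_cache = use_cache
        self.use_return_dict = use_return_dict
        self.torchscript = torchscript


def forward(config, use_cache=None, return_dict=None, past_key_values=None):
    # Fall back to config defaults when the caller passes no explicit flags.
    use_cache = use_cache if use_cache is not None else config.use_cache
    return_dict = return_dict if return_dict is not None else config.use_return_dict

    # TorchScript tracing cannot return dicts/dataclasses, so force tuples.
    if config.torchscript:
        return_dict = False

    hidden = "hidden_states"                      # placeholder for real tensors
    new_past = ("key", "value") if use_cache else None  # placeholder past_kv

    if not return_dict:
        return (hidden, new_past)
    return {"last_hidden_state": hidden, "past_key_values": new_past}
```

With `torchscript=True` the output is always a tuple, which is what `torch.jit.trace` requires when re-feeding `past_key_values` across decoding steps.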

