Svngoku committed on
Commit
d8636dd
1 Parent(s): 9cf143d

Update README.md

Files changed (1): README.md (+3 -0)
README.md CHANGED
@@ -53,6 +53,8 @@ You can run this model using `vLLM` or `ollama`. The following instructions are
 
 
 4. **Run the Docker Command**:
+
+```sh
 docker run -it \
   --pull=always \
   -e SANDBOX_USER_ID=$(id -u) \
@@ -68,6 +70,7 @@ docker run -it \
   --add-host host.docker.internal:host-gateway \
   --name opendevin-app-$(date +%Y%m%d%H%M%S) \
   ghcr.io/opendevin/opendevin:main
+```
 
 Replace `ipaddress` with your actual local IP address and make sure you have your llama server hosted on all bound IP addresses. I had issues when I tried to use `0.0.0.0` for localhost.
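The note above asks you to substitute `ipaddress` with your machine's actual LAN address rather than `0.0.0.0`. As a sketch, one common way to look that address up on Linux (this command is an illustration, not part of the commit; `hostname -I` is Linux-specific, and on macOS `ipconfig getifaddr en0` is a typical alternative):

```sh
# Print the first IP address bound to this host (Linux only).
# Use this value wherever the docker run command above expects ipaddress.
hostname -I | awk '{print $1}'
```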