Docker instances created by Cloud Orchestrator enable vhost_user_vsock as true. #1721
0405ysj wants to merge 3 commits into google:main
Conversation
# Defaults to false.
# build_api_credentials_use_gce_metadata=
#
# Select a default option of vhost_user_vsock for launching CVD.
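For context, the snippet above is from the HO service configuration file; the new entry would presumably be set like the existing ones, e.g. (a sketch — the exact key name is an assumption based on the comment in the diff):

```
# Select a default option of vhost_user_vsock for launching CVD.
# Defaults to false.
vhost_user_vsock=true
```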
If vhost_user_vsock is part of the canonical configuration sent in the create-CVD request, why do we need it as part of the HO configuration as well?
What I want to achieve is enabling the vhost_user_vsock option for any HO in a docker instance launched by CO. If vhost_user_vsock is disabled, we cannot launch CF instances across multiple docker instances. It is enabled on arm64, but disabled on x86_64.
When the HO receives a POST /cvds API request, the request may or may not contain a canonical configuration as input. If it does, the HO calls cvd load to launch CF with the given canonical configuration; otherwise, it calls cvd create to launch CF based on option flags. Since cvdr create deals with both kinds via POST /cvds, the HO in a docker instance should be able to handle both approaches to achieve the above.
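The two request flavors described above can be sketched as a simple dispatch in the HO. The request shape and function names here are illustrative assumptions, not the actual orchestrator API:

```go
package main

import "fmt"

// CreateCVDRequest is a hypothetical shape for the POST /cvds body:
// it may carry a canonical EnvConfig, or only legacy flag-style fields.
type CreateCVDRequest struct {
	EnvConfig map[string]interface{} // canonical configuration, if provided
	BuildID   string                 // legacy flag-based path
}

// chooseCommand sketches the dispatch: with a canonical config the HO
// would shell out to `cvd load`, otherwise to `cvd create`.
func chooseCommand(req CreateCVDRequest) string {
	if req.EnvConfig != nil {
		return "cvd load"
	}
	return "cvd create"
}

func main() {
	withConfig := CreateCVDRequest{EnvConfig: map[string]interface{}{}}
	legacy := CreateCVDRequest{BuildID: "12345"}
	fmt.Println(chooseCommand(withConfig)) // cvd load
	fmt.Println(chooseCommand(legacy))     // cvd create
}
```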
POST /cvds without a canonical config is deprecated; please do not add anything new to this path.
That said, the source of truth for cuttlefish creation in the HO is the canonical config received in the request. This ensures reproducibility: I can copy/paste the config received by the HO and create the same device on my desktop using cvd create directly, which is something I do on a regular basis when device creation fails on remote hosts. By adding this as an HO service configuration parameter, you are disrupting that behavior: it introduces a new source of configuration for device creation and breaks the HO's device-creation reproducibility.
If you need to alter the canonical config in the Docker deployment, please do that at the Cloud Orchestrator layer, where you can intercept the request and modify it. There is no need to add another source of configuration for creating cuttlefish devices; canonical configurations should be the only one.
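The CO-layer interception suggested above could look roughly like the following: before forwarding the create-CVD request to the HO, the Docker instance manager patches the canonical config so every instance has vhost_user_vsock enabled. The JSON layout ("instances" list with a "vm" section) and the function name are assumptions for illustration only:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// enableVhostUserVsock patches a decoded canonical config in place,
// setting vhost_user_vsock=true in each instance's vm section
// (creating the vm section if it is absent).
func enableVhostUserVsock(envConfig map[string]interface{}) {
	instances, _ := envConfig["instances"].([]interface{})
	for _, inst := range instances {
		m, ok := inst.(map[string]interface{})
		if !ok {
			continue
		}
		vm, ok := m["vm"].(map[string]interface{})
		if !ok {
			vm = map[string]interface{}{}
			m["vm"] = vm
		}
		vm["vhost_user_vsock"] = true
	}
}

func main() {
	raw := `{"instances":[{"vm":{"cpus":4}},{}]}`
	var cfg map[string]interface{}
	json.Unmarshal([]byte(raw), &cfg)
	enableVhostUserVsock(cfg)
	out, _ := json.Marshal(cfg)
	fmt.Println(string(out))
}
```

Doing the rewrite here keeps the HO's invariant intact: the config the HO receives is still the complete, reproducible source of truth.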
I will take a look at whether intercepting and modifying the HTTP request from https://github.com/google/cloud-android-orchestration/blob/main/pkg/app/instances/hostclient.go by CO is applicable. If it is, I think CO with DockerIM can adjust the canonical config by itself to set vhost_user_vsock to true.
However, I still think I need to support enabling vhost_user_vsock via the HO for POST /cvds without a canonical config too, even though it is deprecated. My understanding is that one of the main flows in cvdr is cvdr create with the --build_id or --build_target options, and that flow still doesn't use a canonical config at the HO. Also, I'm not even sure whether the canonical config supports build ID/target, which may require help from cvd fetch. I agree with deprecating all flows without a canonical config from the HO or cvdr, but I think enabling vhost_user_vsock should be orthogonal to that conversion task, not blocked on it. WDYT? @ser-io
You need to make the relevant changes to cvdr so that commands like cvdr create --build_id --build_target use the canonical config behind the scenes; this is something I've been trying to do but haven't had the time for.
google/cloud-android-orchestration#374 is a reference for migrating to the canonical config when using local artifacts.
Context: b/383428636
Cloud Orchestrator will set up the proper environment with google/cloud-android-orchestration#486.