Set Up Users

5 minutes  

In this section, we’ll create a user for each workshop participant, along with a dedicated namespace and resource quota.

Create User Namespaces and Resource Quotas

cd user-setup
./create-namespaces.sh
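
The script lives in the workshop repo; as a rough sketch of what it does (the participant count and quota values below are illustrative assumptions, with namespace names matching the scheme used later in this section), it creates a namespace and a quota per participant:

# Sketch only -- participant count and quota values are assumptions.
for i in $(seq 1 20); do
  oc create namespace "workshop-participant-${i}"
  oc create quota workshop-quota \
    --hard=requests.cpu=2,requests.memory=4Gi,limits.cpu=4,limits.memory=8Gi \
    -n "workshop-participant-${i}"
done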

Create Users

Create an HTPasswd file with participant credentials, then replace the ROSA-managed HTPasswd IdP with a custom one:

./create-users.sh
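
In outline, the script likely resembles the following sketch (the participant count, IdP names, and existing-IdP placeholder are assumptions; the password matches the one used in the login test later in this section):

# Sketch only -- build an HTPasswd file with bcrypt entries per participant.
touch users.htpasswd
for i in $(seq 1 20); do
  htpasswd -B -b users.htpasswd "participant${i}" 'TempPass123!'
done

# Swap the ROSA-managed HTPasswd IdP for one built from the file.
rosa delete idp <existing-htpasswd-idp> -c rosa-test --yes
rosa create idp -c rosa-test --type htpasswd --name workshop-users --from-file users.htpasswd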

Re-create the cluster-admin User and Log In Again

Replacing the HTPasswd IdP also removes the ROSA-managed cluster-admin credentials, so re-create the cluster-admin user and log in again:

rosa create admin -c rosa-test
oc login <Cluster API URL> --username cluster-admin --password <cluster admin password>
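
rosa create admin prints both the API URL and a generated password. If you need to look the URL up again later, it also appears in the cluster description:

rosa describe cluster -c rosa-test | grep 'API URL'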

Add Role to Users

Grant each user access to their namespace only:

./add-role-to-users.sh
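
Roughly, the script grants each participant the admin role scoped to their own namespace, along these lines (sketch only; the participant count is an assumption):

# Sketch only.
for i in $(seq 1 20); do
  oc adm policy add-role-to-user admin "participant${i}" -n "workshop-participant-${i}"
done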

Note: warnings like the following can be safely ignored. OpenShift creates the User object the first time a participant logs in, and as the second line shows, the role binding is still created:

Warning: User 'participant1' not found
clusterrole.rbac.authorization.k8s.io/admin added: "participant1"

Test Login

Install the OpenShift CLI

To test the logins from our local machine, we’ll need to install the OpenShift CLI.

For macOS, we can install the OpenShift CLI using the Homebrew package manager:

brew install openshift-cli
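
On Linux, one option is to download the client tarball from the official OpenShift mirror and place the oc binary on your PATH:

curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz
tar -xzf openshift-client-linux.tar.gz oc
sudo mv oc /usr/local/bin/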

For other installation options, please refer to the OpenShift documentation.

Log In as a Workshop User

Try logging in as one of the workshop users from your local machine:

oc login https://api.<cluster-domain>:443 -u participant1 -p 'TempPass123!'

It should say something like:

Login successful.

You have one project on this server: "workshop-participant-1"
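
As an optional sanity check, confirm the participant sees only their own project and is denied access elsewhere (the second namespace name below is an assumption based on the naming scheme above):

oc get projects
# Should fail with a Forbidden error:
oc get pods -n workshop-participant-2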

Confirm Access to the LLM

Let’s ensure we can access the LLM from the workshop user account.

Start a pod that has access to the curl command:

oc run curl --rm -it --image=curlimages/curl:latest \
  --overrides='{
    "apiVersion": "v1",
    "spec": {
      "containers": [{
        "name": "curl",
        "image": "curlimages/curl:latest",
        "stdin": true,
        "tty": true,
        "command": ["sh"],
        "resources": {
          "limits": {
            "cpu": "50m",
            "memory": "100Mi"
          },
          "requests": {
            "cpu": "50m",
            "memory": "100Mi"
          }
        }
      }]
    }
  }'

Then run the following command to send a prompt to the LLM:

curl -X "POST" \
 'http://meta-llama-3-2-1b-instruct.nim-service:8000/v1/chat/completions' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "meta/llama-3.2-1b-instruct",
        "messages": [
        {
          "content":"What is the capital of Canada?",
          "role": "user"
        }],
        "top_p": 1,
        "n": 1,
        "max_tokens": 1024,
        "stream": false,
        "frequency_penalty": 0.0,
        "stop": ["STOP"]
      }'

The response should look something like this:

{
  "id": "chatcmpl-2ccfcd75a0214518aab0ef0375f8ca21",
  "object": "chat.completion",
  "created": 1758919002,
  "model": "meta/llama-3.2-1b-instruct",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "reasoning_content": null,
        "content": "The capital of Canada is Ottawa.",
        "tool_calls": []
      },
      "logprobs": null,
      "finish_reason": "stop",
      "stop_reason": null
    }
  ],
  "usage": {
    "prompt_tokens": 42,
    "total_tokens": 50,
    "completion_tokens": 8,
    "prompt_tokens_details": null
  },
  "prompt_logprobs": null
}
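
When you’re done, type exit to leave the pod’s shell; since we passed --rm to oc run, the pod is deleted automatically.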