Managing Policies and Security with Istio


1 hour 30 minutes 1 Credit

GSP657


Overview

This lab demonstrates how to leverage Istio's identity and access control policies to help secure microservices running on GKE.

You will use Hipster Shop, an Istio-enabled multi-service sample application, to understand and practice:

  • Incrementally adopting Istio mutual TLS authentication across the service mesh

  • Enabling end-user (JWT) authentication for the frontend service

  • Using an Istio access control policy to secure access to the frontend service

Objectives

In this lab, you learn how to perform the following tasks:

  • Complete cluster configuration

  • Download open source Istio with sample configs, and istioctl

  • Deploy Hipster Shop, an Istio-enabled multi-service application

  • Understand authentication and enable service-to-service authentication with mTLS

  • Enable end-user JWT authentication alongside mTLS

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents conflicts between your personal account and the Student account, which could cause extra charges to be incurred on your personal account.
  • Time to complete the lab. Remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username from the Lab Details panel and paste it into the Sign in dialog. Click Next.

  4. Copy the Password from the Lab Details panel and paste it into the Welcome dialog. Click Next.

    Important: You must use the credentials from the left panel. Do not use your Google Cloud Skills Boost credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  5. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Cloud Console opens in this tab.

Note: You can view the menu with a list of Google Cloud products and services by clicking the Navigation menu at the top-left.

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

  1. Click Activate Cloud Shell at the top of the Google Cloud console.

When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:

Your Cloud Platform project in this session is set to YOUR_PROJECT_ID

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  2. (Optional) You can list the active account name with this command:

gcloud auth list
  3. Click Authorize.

  4. Your output should now look like this:

Output:

ACTIVE: *
ACCOUNT: student-01-xxxxxxxxxxxx@qwiklabs.net

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
  5. (Optional) You can list the project ID with this command:

gcloud config list project

Output:

[core]
project = <project_ID>

Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.

Task 1. Complete cluster configuration

This lab environment has already been partially configured.

  • A GKE cluster named central was created.

  • The open source Istio control plane was applied. This includes telemetry adapters for Prometheus, Grafana, Jaeger, and Kiali.

  • This Istio installation uses the default MTLS_PERMISSIVE mesh-wide security option. This means that services in the mesh accept both plaintext and mutual TLS traffic by default. In PERMISSIVE mode, you can still enforce strict mutual TLS for individual services, which you'll explore below.
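
If you want to confirm that no strict mTLS policies exist yet, one quick check (a sketch; it assumes kubectl access to the cluster, which you configure in the steps below) is to list PeerAuthentication resources across all namespaces:

# List all PeerAuthentication policies; an empty result means no
# workload or namespace has overridden the PERMISSIVE default
kubectl get peerauthentication --all-namespaces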

Configure cluster access for kubectl

  1. If you haven't already, open a new Cloud Shell session, then run the following commands to set environment variables for the cluster name, region, and project:

    • Set the Kubernetes cluster name:

      export CLUSTER_NAME=central
    • Set the region:

      export CLUSTER_REGION={{{project_0.default_region}}}
    • Set the project:

      export GCLOUD_PROJECT=$(gcloud config get-value project)
  2. Now configure kubectl command line access by running the following command:

    gcloud container clusters get-credentials $CLUSTER_NAME \
      --region $CLUSTER_REGION \
      --project $GCLOUD_PROJECT
  3. Install the demo profile:

    istioctl install --set profile=demo -y
  4. Clone the Official Istio repo:

    git clone https://github.com/istio/istio.git
  5. Apply the Istio sample addons (Prometheus, Grafana, etc.):

    cd istio && kubectl apply -f samples/addons

Verify cluster and Istio installation

  1. Check that your cluster is up and running:

    gcloud container clusters list

    Output:

    NAME: central
    LOCATION: {{{project_0.default_region}}}
    MASTER_VERSION: 1.20.10-gke.301
    MASTER_IP: 104.154.230.235
    MACHINE_TYPE: e2-standard-2
    NODE_VERSION: 1.20.10-gke.301
    NUM_NODES: 3
    STATUS: RUNNING

    Move on only when the cluster's status is set to RUNNING.
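
    If the cluster is still provisioning, you can poll the listing until the status changes (a convenience sketch using the standard watch utility available in Cloud Shell):

    # Re-run the cluster listing every 10 seconds; press Ctrl+C to stop
    watch -n 10 gcloud container clusters list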

  2. Ensure the following Kubernetes services are deployed: istio-egressgateway, istio-ingressgateway and istiod. You'll also see other deployed services:

    kubectl get service -n istio-system

    Output:

    NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                                                      AGE
    grafana                ClusterIP      10.208.14.57    <none>          3000/TCP                                                                     39s
    istio-egressgateway    ClusterIP      10.208.5.194    <none>          80/TCP,443/TCP,15443/TCP                                                     2m12s
    istio-ingressgateway   LoadBalancer   10.208.10.171   34.69.247.210   15021:30346/TCP,80:30077/TCP,443:30054/TCP,31400:31141/TCP,15443:31957/TCP   2m12s
    istiod                 ClusterIP      10.208.10.212   <none>          15010/TCP,15012/TCP,443/TCP,15014/TCP                                        2m23s
    jaeger-collector       ClusterIP      10.208.4.67     <none>          14268/TCP,14250/TCP,9411/TCP                                                 34s
    kiali                  ClusterIP      10.208.12.27    <none>          20001/TCP,9090/TCP                                                           29s
    prometheus             ClusterIP      10.208.11.127   <none>          9090/TCP                                                                     26s
    tracing                ClusterIP      10.208.12.31    <none>          80/TCP,16685/TCP                                                             35s
    zipkin                 ClusterIP      10.208.9.65     <none>          9411/TCP                                                                     35s
  3. Ensure the corresponding Kubernetes pods are deployed and all containers are up and running, including istio-egressgateway-*, istio-ingressgateway-*, and istiod-*:

    kubectl get pods -n istio-system

    Output:

    NAME                                    READY   STATUS    RESTARTS   AGE
    grafana-6c5dc6df7c-84kts                1/1     Running   0          27m
    istio-egressgateway-59c6bd6db4-gpz6h    1/1     Running   0          29m
    istio-ingressgateway-78cc8f99ff-d4hbc   1/1     Running   0          29m
    istiod-8d4f5cc85-xkzb4                  1/1     Running   0          29m
    jaeger-9dd685668-n2zx8                  1/1     Running   0          27m
    kiali-548475845-rpd64                   1/1     Running   0          27m
    prometheus-699b7cc575-t6xcr             2/2     Running   0          27m

Task 2. Deploy Hipster Shop

Now set up the Hipster Shop sample microservices application and explore the app.

Download lab files from a GitHub repo

  • From Cloud Shell, clone the lab repo:

cd ~
git clone https://github.com/GoogleCloudPlatform/istio-samples.git
cd istio-samples/security-intro

Deploy the Hipster Shop application

Hipster Shop Overview

In this lab you will deploy the Hipster Shop application, a sample 10-tier microservices application, within a single Kubernetes cluster: central. The application is a web-based e-commerce app where users can browse items, add them to the cart, and purchase them.

External user requests start in the Frontend service, and subsequent requests are issued to other microservices.

Hipster Architecture Diagram

Deploy the Hipster Shop application

  1. Retrieve the .yaml which describes the Hipster Shop application:

    mkdir ./hipstershop
    curl -o ./hipstershop/kubernetes-manifests.yaml https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/kubernetes-manifests.yaml
  2. Look at the .yaml:

    cat ./hipstershop/kubernetes-manifests.yaml
  3. Review the many Deployments and Services used by Hipster Shop. Search for containers and note that each Deployment has a single container for each service in the application.

  4. Inject Istio proxy sidecars to the .yaml using istioctl:

    istioctl kube-inject -f hipstershop/kubernetes-manifests.yaml -o ./hipstershop/kubernetes-manifests-withistio.yaml
  5. Look at the .yaml with Istio proxy sidecars injected:

    cat ./hipstershop/kubernetes-manifests-withistio.yaml

    Now, when you scroll up and look for containers, you can see extra configuration describing the proxy sidecar containers that will be deployed.

    Note: istioctl kube-inject takes a Kubernetes YAML file as input, and outputs a version of that YAML which includes the Istio proxy.

    You can look at one of the Deployments in the output from istioctl kube-inject. It includes a second container, the Istio sidecar, along with all the configuration necessary.
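
    As a quick sanity check (a sketch; any nonzero count confirms injection, since only injected manifests mention the sidecar), you can count how many lines of the output file reference the istio-proxy container:

    # Count lines mentioning the injected sidecar; expect a nonzero result
    grep -c "istio-proxy" ./hipstershop/kubernetes-manifests-withistio.yaml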
  6. In Cloud Shell, use the following command to deploy the application Pods along with injected proxy sidecars:

    kubectl apply -f ./hipstershop/kubernetes-manifests-withistio.yaml

    Output:

    deployment.apps/emailservice created
    service/emailservice created
    deployment.apps/checkoutservice created
    service/checkoutservice created
    deployment.apps/recommendationservice created
    service/recommendationservice created
    deployment.apps/frontend created
    service/frontend created
    service/frontend-external created
    deployment.apps/paymentservice created
    service/paymentservice created
    deployment.apps/productcatalogservice created
    service/productcatalogservice created
    deployment.apps/cartservice created
    service/cartservice created
    deployment.apps/loadgenerator created
    deployment.apps/currencyservice created
    service/currencyservice created
    deployment.apps/shippingservice created
    service/shippingservice created
    deployment.apps/redis-cart created
    service/redis-cart created
    deployment.apps/adservice created
    service/adservice created

    Click Check my progress to verify the objective.

    Deploy the application Pods along with injected proxy sidecars
  7. Retrieve the .yaml which describes the Hipster Shop service mesh configuration model:

    curl -o ./hipstershop/istio-manifests.yaml https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/istio-manifests.yaml
  8. Look at the .yaml:

    cat hipstershop/istio-manifests.yaml

    Output:

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: frontend-gateway
    spec:
      selector:
        istio: ingressgateway # use Istio default gateway implementation
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "*"
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: frontend-ingress
    spec:
      hosts:
      - "*"
      gateways:
      - frontend-gateway
      http:
      - route:
        - destination:
            host: frontend
            port:
              number: 80
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: frontend
    spec:
      hosts:
      - "frontend.default.svc.cluster.local"
      http:
      - route:
        - destination:
            host: frontend
            port:
              number: 80
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: allow-egress-googleapis
    spec:
      hosts:
      - "accounts.google.com" # Used to get token
      - "*.googleapis.com"
      ports:
      - number: 80
        protocol: HTTP
        name: http
      - number: 443
        protocol: HTTPS
        name: https
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: allow-egress-google-metadata
    spec:
      hosts:
      - metadata.google.internal
      addresses:
      - 169.254.169.254 # Compute Engine metadata server
      ports:
      - number: 80
        name: http
        protocol: HTTP
      - number: 443
        name: https
        protocol: HTTPS
    ---

    Review the many mesh objects including a Gateway, VirtualServices, and ServiceEntries that describe the initial mesh service access.

  9. In Cloud Shell, deploy the Istio service mesh configuration:

    kubectl apply -f hipstershop/istio-manifests.yaml

    Output:

    gateway.networking.istio.io/frontend-gateway created
    virtualservice.networking.istio.io/frontend-ingress created
    virtualservice.networking.istio.io/frontend created
    serviceentry.networking.istio.io/allow-egress-googleapis created
    serviceentry.networking.istio.io/allow-egress-google-metadata created

    Click Check my progress to verify the objective.

    Deploy the Istio service mesh configuration
  10. Check that all pods are Running and Ready:

kubectl get pods -n default

Output:

NAME                                     READY   STATUS
adservice-ddd7f74d6-spwvg                2/2     Running
cartservice-6f7f69994f-p7mvh             2/2     Running
checkoutservice-665c778458-gsx47         2/2     Running
currencyservice-7b859f55d7-h6wzp         2/2     Running
emailservice-7c5c5947b-f5mqt             2/2     Running
frontend-7fc6677c68-9zpkl                2/2     Running
loadgenerator-58864d6d69-m4ltc           2/2     Running
paymentservice-778d7b68f6-z4n6s          2/2     Running
productcatalogservice-6c96cd8796-77g7p   2/2     Running
recommendationservice-594c9bb47b-fdhkg   2/2     Running
redis-cart-c65b5fcc8-j66wd               2/2     Running
shippingservice-66cd456564-xtt58         2/2     Running

Each pod has 2 containers because each pod now includes the injected Istio sidecar proxy.
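
To see the two containers side by side for one pod, a minimal sketch using the frontend Deployment (in the Hipster Shop manifests the application container is named server):

# Print the container names in the frontend pod; expect "server istio-proxy"
kubectl get pod -l app=frontend -o jsonpath='{.items[0].spec.containers[*].name}'; echo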

Use the Hipster Shop application

Open Hipster Shop and explore the application.

  1. Get the Istio ingress gateway IP address:

    kubectl get -n istio-system service istio-ingressgateway \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo

    Output:

    104.197.97.156

    Your IP will be different. Use this IP to access Hipster in the next Task.

  2. Copy and paste the Istio ingress gateway IP address to a web browser tab.

    You should see the Hipster Shop home page, at http://[EXTERNAL_IP].

    Hipster Shop UI

  3. Verify that the Hipster app is fully functional by browsing the products, adding products to your cart, checking out, etc.

    Now you're ready to enforce security policies for this application.

Task 3. Understand authentication and enable service-to-service authentication with mTLS

In this task, you enable service-to-service authentication with mTLS, first for a single service, then for an entire namespace.

Authentication refers to identity: who is this service? Who is this end-user? Can I trust that they are who they say they are? Authentication provides strong identity and secure communication.

One benefit of using Istio is that it provides uniformity for both service-to-service and end-user-to-service authentication. Istio abstracts authentication away from your application code by tunneling all service-to-service communication through the Envoy sidecar proxies.

By using a centralized public key infrastructure, Istio provides consistency, making sure authentication is set up properly across your mesh.

Further, Istio allows you to adopt mTLS on a per-service basis, or easily toggle end-to-end encryption for your entire mesh.

Open Kiali to watch security changes

Use the Kiali Graph panel with the Security display option to visualize mTLS settings.

  1. In Cloud Shell, set up port-forwarding for Kiali:

    kubectl -n istio-system port-forward \
      $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 8080:20001

    This leaves the port-forwarding command running in Cloud Shell.

  2. Open the Kiali UI:

    • Click the Web preview button in the Cloud Shell toolbar
    • Click Preview on Port 8080

    This is the Overview page of your mesh. It displays all the namespaces that have services in your mesh. You can change the time range for the overview data using the dropdown at the top of the page, which defaults to Last 1m.

View the default namespace service graph visualization

  1. Click Graph on the left menu panel.

  2. Click Select Namespaces, then check default to view the service graph for the default namespace, which holds your running Hipstershop services.

    You can use the bottom toolbar to resize the graph rendering.

  3. Click the Display dropdown, and check the Security box.

    Note: Notice that in permissive mode, if you move your mouse over the frontend, no locks appear on the service graph.

    Kiali graph

    After you use Hipster Shop to add products and check out, you will see an istio-ingressgateway node in the service graph adjacent to frontend.

Enable mTLS for one service: frontend

Right now, the cluster is in PERMISSIVE mTLS mode, meaning service-to-service ("east-west") mesh traffic is not required to be encrypted.

  • First, toggle mTLS for the frontend microservice.

For both inbound and outbound requests for the frontend to be encrypted, you need two Istio resources: a PeerAuthentication policy (to require mTLS for inbound requests) and a DestinationRule (to use mTLS for the frontend's outbound requests).

  1. In Cloud Shell, open another session. Click the + in the Cloud Shell tab-bar to Add Cloud Shell session.

  2. In this new session, return to the lab directory:

    cd ~/istio-samples/security-intro
  3. View the PeerAuthentication and DestinationRule resources:

    cat ./manifests/mtls-frontend.yaml

    Output:

    apiVersion: "security.istio.io/v1beta1" kind: "PeerAuthentication" metadata: name: "frontend" namespace: "default" spec: selector: matchLabels: app: frontend mtls: mode: STRICT --- apiVersion: "networking.istio.io/v1alpha3" kind: "DestinationRule" metadata: name: "frontend" spec: host: "frontend.default.svc.cluster.local" trafficPolicy: tls: mode: ISTIO_MUTUAL
  4. Apply the resources to the cluster:

    kubectl apply -f ./manifests/mtls-frontend.yaml

    Output:

    peerauthentication.security.istio.io/frontend created destinationrule.networking.istio.io/frontend created

    Click Check my progress to verify the objective.

    Enable mTLS for one service: frontend
  5. Verify that mTLS is enabled for the frontend by trying to reach the frontend from the istio-proxy container of a different mesh service, within the cluster.

    • First, try to reach frontend from productcatalogservice with plain HTTP:

    kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl http://frontend:80/ -o /dev/null -s -w '%{http_code}\n'

    You should see:

    000
    command terminated with exit code 56
    • Exit code 56 means "failure to receive network data" from the curl command.

    This is expected.
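
    For a control-plane view of the same enforcement, istioctl can describe a pod's effective authentication configuration (a convenience check; the exact output format varies by Istio version):

    # Describe the frontend pod; the output should report STRICT mTLS
    istioctl experimental describe pod $(kubectl get pod -l app=frontend -o jsonpath='{.items[0].metadata.name}')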

  6. Go back to the Kiali graph and look for security updates.

    After a few seconds, move your mouse over frontend and notice the locks in and out of the frontend service. This indicates mTLS is required for traffic inbound and outbound.

    Kiali locks frontend

    You may need to zoom in to see the locks.

    The TLS key and certificate for productcatalogservice come from Citadel, Istio's certificate authority, which runs centrally as part of the control plane.

    Citadel generates keys and certificates for all mesh services, even when the mesh-wide mTLS mode is PERMISSIVE.
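
    You can inspect the certificates loaded into a workload's sidecar (a sketch; istioctl proxy-config secret is available in recent Istio releases):

    # Show the workload cert and root cert Envoy presents for mTLS
    istioctl proxy-config secret $(kubectl get pod -l app=productcatalogservice -o jsonpath='{.items[0].metadata.name}')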

Enable mTLS for an entire namespace: default

As an alternative to adopting mTLS for individual services, you can introduce mTLS for an entire namespace, default for example. Doing this will automatically encrypt service-to-service traffic for every Hipster Shop service running in the default namespace.

  1. In Cloud Shell, view the PeerAuthentication, and DestinationRule resources for this approach:

    cat ./manifests/mtls-default-ns.yaml

    Output:

    apiVersion: "security.istio.io/v1beta1" kind: "PeerAuthentication" metadata: name: "default" namespace: "default" spec: mtls: mode: STRICT --- # apiVersion: networking.istio.io/v1alpha3 # kind: DestinationRule # metadata: # name: default # namespace: default # spec: # host: '*.default.svc.cluster.local' # trafficPolicy: # tls: # mode: ISTIO_MUTUAL

    Notice that the same resource types, PeerAuthentication and DestinationRule, are used for namespace-wide mTLS as for service-specific mTLS.

  2. Apply the resources:

    kubectl apply -f ./manifests/mtls-default-ns.yaml

    Output:

    peerauthentication.security.istio.io/default created

    Click Check my progress to verify the objective.

    Enable mTLS for an entire namespace: default
  3. Go back to the Kiali graph and look for security updates.

    After a few seconds, notice that all services in the default namespace now require mTLS for inbound and outbound traffic.

    Kiali locks all

    You can also enable mesh-wide mTLS by applying a PeerAuthentication resource named default to the Istio root namespace, istio-system, as sketched below.
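
    For reference only (do not apply this during the lab), a mesh-wide version of the policy would look like the following; the --dry-run=client flag validates the manifest without changing the cluster:

# Mesh-wide strict mTLS: a PeerAuthentication named "default" in the
# root namespace applies to every workload in the mesh
kubectl apply --dry-run=client -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF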

    In this task you learned how you can incrementally adopt encrypted service communication using Istio, at your own pace, without any code changes.

Task 4. Enable end-user JWT authentication alongside mTLS

After enabling service-to-service authentication in the default namespace, you can enforce end-user ("origin") authentication for the frontend service, using JSON Web Tokens (JWT).

Note: It is recommended to only use JWT authentication alongside mTLS, and not JWT by itself. This is because plaintext JWTs are not themselves encrypted, only signed. Forged or intercepted JWTs could compromise your service mesh.
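
You can see for yourself that a JWT's payload is readable by anyone who intercepts it: the middle dot-separated segment is only base64-encoded, not encrypted (a sketch; the token below is a throwaway example, not a lab credential):

# Decode the payload segment of a sample JWT; no key is required,
# which demonstrates that JWTs are signed rather than encrypted
SAMPLE_JWT='eyJhbGciOiJub25lIn0.eyJzdWIiOiJkZW1vMSJ9.c2ln'
echo "$SAMPLE_JWT" | cut -d '.' -f2 | base64 -d; echo

The output is the plaintext claim {"sub":"demo1"}.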

In this task, you build on the mutual TLS authentication already configured for the default namespace.

First, create a RequestAuthentication resource to enforce JWT authentication for inbound requests to the frontend service.

  1. View the RequestAuthentication policy resource:

    cat ./manifests/jwt-frontend-request.yaml

    Output:

    apiVersion: security.istio.io/v1beta1
    kind: RequestAuthentication
    metadata:
      name: frontend
      namespace: default
    spec:
      selector:
        matchLabels:
          app: frontend
      jwtRules:
      - issuer: "testing@secure.istio.io"
        jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.5/security/tools/jwt/samples/jwks.json"

    This policy, a RequestAuthentication resource, uses Istio's test JSON Web Key Set (jwksUri) as the source of public keys used to verify incoming JWTs. When you apply this policy, the Istio control plane pushes the public keys down to the frontend's sidecar proxy, which will allow it to accept or deny requests.

    This RequestAuthentication resource works alongside the PeerAuthentication resource you created earlier: PeerAuthentication governs service-to-service (peer) identity, while RequestAuthentication governs end-user (request) identity.

  2. Apply the updated frontend Policy:

    kubectl apply -f ./manifests/jwt-frontend-request.yaml

    Output:

    requestauthentication.security.istio.io/frontend created
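
    At this point, both kinds of authentication resources exist side by side in the default namespace; you can confirm with a quick listing:

# List mTLS (peer) and JWT (request) authentication resources together
kubectl get peerauthentication,requestauthentication -n default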
  3. Set a local TOKEN variable with an invalid token. This TOKEN is used on the client-side to make requests to the frontend:

    TOKEN=helloworld; echo $TOKEN
  4. Check if the frontend can be reached with an invalid JWT:

    kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy -- \
      curl http://frontend:80/ -o /dev/null --header "Authorization: Bearer $TOKEN" -s -w '%{http_code}\n'

    Output:

    000
    command terminated with exit code 56

    Exit code 56 means "failure to receive network data" from the curl command.

    This is expected. The invalid token was not able to be validated per the policy.

  5. View an AuthorizationPolicy resource which requires a JWT:

    cat ./manifests/jwt-frontend-authz.yaml

    This policy declares that all requests to the frontend workload must have a JWT.

    Output:

    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: require-jwt
      namespace: default
    spec:
      selector:
        matchLabels:
          app: frontend
      action: ALLOW
      rules:
      - from:
        - source:
            requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]

    In this task, you observed how the frontend service combines a RequestAuthentication policy (JWT validation) with an AuthorizationPolicy (access control). A sketch of an optional end-to-end test follows below.
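
The lab stops at viewing this policy, but if you want to see the allow path end to end, one approach (a sketch; it assumes Istio's release-1.5 demo token, which pairs with the jwksUri above, is still published at the same path) is to apply the policy, fetch the demo token, and call the frontend through the ingress gateway:

# Apply the AuthorizationPolicy shown above
kubectl apply -f ./manifests/jwt-frontend-authz.yaml

# Fetch Istio's demo JWT, signed by the test issuer in jwtRules
TOKEN=$(curl -s https://raw.githubusercontent.com/istio/istio/release-1.5/security/tools/jwt/samples/demo.jwt)

# Reach the frontend via the ingress gateway: expect 403 without a
# token (denied by require-jwt) and 200 with the valid token attached
INGRESS_IP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s -o /dev/null -w '%{http_code}\n' http://$INGRESS_IP/
curl -s -o /dev/null -w '%{http_code}\n' http://$INGRESS_IP/ -H "Authorization: Bearer $TOKEN"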

Understand Istio authorization

Authentication refers to the who: it provides strong identity and secures service-to-service and end-user-to-service communication.

Authorization refers to the what: what a service or user is allowed to do.

Istio authorization is designed to be simple, flexible, and high performance. Istio authorization policies are defined in YAML and enforced by the Envoy proxies, each of which runs an authorization engine.

AuthorizationPolicy resources include a selector and a list of rules. The selector specifies the target: the namespace and/or workload(s) to which the policy applies.

The rules in an AuthorizationPolicy specify who is allowed to do what under which conditions. Rules use from, to, and when lists.
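
As an illustration of that vocabulary (a hypothetical policy, not part of this lab), the following rule combines all three lists: it allows only GET requests to the frontend from the ingress gateway's mTLS identity, and only when the request carries a JWT from the test issuer. The --dry-run=client flag validates the manifest without changing the cluster:

# Hypothetical AuthorizationPolicy combining from, to, and when
kubectl apply --dry-run=client -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: frontend-fine-grained
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  action: ALLOW
  rules:
  - from:       # WHO: callers with this mTLS service identity
    - source:
        principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]
    to:         # WHAT: only read-only methods
    - operation:
        methods: ["GET"]
    when:       # CONDITIONS: a JWT issued by the test issuer
    - key: request.auth.claims[iss]
      values: ["testing@secure.istio.io"]
EOF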

Congratulations!

In this lab, you incrementally adopted mutual TLS authentication across your service mesh and enabled end-user (JWT) authentication for the frontend service to secure access to the frontend service.

Finish your quest

Continue your quest with Anthos Service Mesh. A quest is a series of related labs that form a learning path. Completing this quest earns you a badge to recognize your achievement. You can make your badge or badges public and link to them in your online resume or social media account. Enroll in any quest that contains this lab and get immediate completion credit. Refer to the Google Cloud Skills Boost catalog for all available quests.

Next steps / Learn more

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated September 25, 2022

Lab Last Tested August 11, 2022

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.