
View and manage quotas ¶

To prevent system capacities from being exhausted without notification, you can set up quotas. Quotas are operational limits. For example, the number of gigabytes allowed for each project can be controlled so that cloud resources are optimized. Quotas can be enforced at both the project and the project-user level.

Typically, you change quotas when a project needs more than ten volumes or 1 TB on a compute node.

Using the Dashboard, you can view default Compute and Block Storage quotas for new projects, as well as update quotas for existing projects.

Using the command-line interface, you can manage quotas for the OpenStack Compute service, the OpenStack Block Storage service, and the OpenStack Networking service (for CLI details, see the OpenStackClient CLI reference). Additionally, you can update Compute service quotas for project users.

The following table describes the Compute and Block Storage service quotas:

Quota Descriptions

View default project quotas ¶

Log in to the dashboard and select the admin project from the drop-down list.

On the Admin tab, open the System tab and click the Defaults category.

The default quota values are displayed.

You can sort the table by clicking on either the Quota Name or Limit column headers.

Update project quotas ¶

Click the Update Defaults button.

In the Update Default Quotas window, you can edit the default quota values.

The dashboard does not show all possible project quotas. To view and update the quotas for a service, use its command-line client. See OpenStack Administrator Guide .
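For example, with the OpenStackClient you can inspect and raise quotas directly. This is a sketch: the command names and flags follow the current openstack CLI, and the project name demo is illustrative.

```
# Show the default quotas applied to new projects
openstack quota show --default

# Raise Compute and Block Storage quotas for an existing project
openstack quota set --instances 20 --cores 48 demo
openstack quota set --volumes 15 --gigabytes 1000 demo
```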

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License . See all OpenStack Legal Documents .

horizon 23.4.1.dev31


How Resource Quotas Work in Rancher Projects

Resource quotas in Rancher include the same functionality as the native version of Kubernetes . However, in Rancher, resource quotas have been extended so that you can apply them to projects.

In a standard Kubernetes deployment, resource quotas are applied to individual namespaces. However, you cannot apply a quota to multiple namespaces with a single action; a separate resource quota must be created for each namespace.

In the following diagram, a Kubernetes administrator is trying to enforce a resource quota without Rancher. The administrator wants to apply a resource quota that sets the same CPU and memory limit to every namespace in their cluster ( Namespace 1-4 ). However, in the base version of Kubernetes, each namespace requires a unique resource quota. The administrator has to create four different resource quotas with the same specs ( Resource Quota 1-4 ) and apply them individually.
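In base Kubernetes, each of those quotas is a separate namespaced ResourceQuota object. A minimal manifest for one namespace (the names and limit values are illustrative) looks like this, and the administrator would have to repeat it for each of the other namespaces:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cpu-mem-quota
  namespace: namespace-1     # must be recreated per namespace
spec:
  hard:
    limits.cpu: "2"          # total CPU limit across pods in this namespace
    limits.memory: 2Gi       # total memory limit across pods in this namespace
```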

Resource quotas work a little differently in Rancher. In Rancher, you apply a resource quota to a project, and the quota propagates to each namespace in the project, after which Kubernetes enforces your limits using its native resource quotas. If you want to change the quota for a specific namespace, you can override it.

The resource quota includes two limits, which you set while creating or editing a project:

Project Limits:

This set of values configures a total limit for each specified resource shared among all namespaces in the project.

Namespace Default Limits:

This set of values configures the default quota limit available for each namespace for each specified resource. When a namespace is created in the project without overrides, this limit is automatically bound to the namespace and enforced.

In the following diagram, a Rancher administrator wants to apply a resource quota that sets the same CPU and memory limit for every namespace in their project ( Namespace 1-4 ). However, in Rancher, the administrator can set a resource quota for the project ( Project Resource Quota ) rather than individual namespaces. This quota includes resource limits for both the entire project ( Project Limit ) and individual namespaces ( Namespace Default Limit ). Rancher then propagates the Namespace Default Limit quotas to each namespace ( Namespace Resource Quota ) when created.

Rancher Resource Quota Implementation

Some more nuanced behavior applies to namespaces created within the Rancher UI. If a quota is deleted at the project level, it is also removed from all namespaces in that project, even if overrides exist. Furthermore, updating an existing namespace default limit at the project level does not propagate the new value to existing namespaces in the project; the updated value applies only to newly created namespaces. To apply a new namespace default limit to existing namespaces, delete and recreate the quota at the project level with the new default value; the new default is then applied to all existing namespaces in the project.

Before creating a namespace in a project, Rancher compares the project's remaining resources with the requested resources, regardless of whether the request comes from the default or overridden limits. If the requested resources exceed the project's remaining capacity for a resource, Rancher assigns the namespace the remaining capacity for that resource.

However, this is not the case with namespaces created outside of Rancher's UI. For namespaces created via kubectl , Rancher will assign a resource quota that has a zero amount for any resource that requested more capacity than what remains in the project.

To create a namespace in an existing project via kubectl , use the field.cattle.io/projectId annotation. To override the default requested quota limit, use the field.cattle.io/resourceQuota annotation.

Note that Rancher will only override limits for resources that are defined on the project quota.
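For illustration, a namespace manifest using these annotations might look like the following sketch. The project ID and limit values are hypothetical, and the resourceQuota annotation takes a JSON-encoded limit object:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  annotations:
    # Assigns the namespace to an existing Rancher project (ID is hypothetical)
    field.cattle.io/projectId: "c-m-abc123:p-xyz789"
    # Overrides the namespace default limit for this namespace
    field.cattle.io/resourceQuota: '{"limit":{"limitsCpu":"500m","limitsMemory":"256Mi","configMaps":"50"}}'
```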

In this example, if the project's quota does not include configMaps in its list of resources, then Rancher will ignore configMaps in this override.

Users are advised to create dedicated ResourceQuota objects in namespaces to configure additional custom limits for resources not defined on the project. Resource quotas are native Kubernetes objects, and Rancher will ignore user-defined quotas in namespaces belonging to a project with a quota, thus giving users more control.

The following table explains the key differences between the two quota types.


Set the quota project

This page describes how to set a quota project for your client-based APIs. For information about what the quota project is, how to set the quota API, and how the quota project is determined, see About the quota project.

When you make a request to a client-based API, if a quota project cannot be identified, the request fails.

The quota project can be set in multiple ways; when a request is made, the following options are checked, in order of precedence.

  • The quota project set in the environment or in the request .
  • If you use an API key to provide credentials for a request, the project associated with the API key is used as the quota project.
  • If you use the Google Cloud CLI to get your access token, and you've authenticated to the gcloud CLI with your user credentials, the gcloud CLI shared project is sometimes used as the quota project. Not all client-based APIs fall back on the shared project.
  • If the principal for the API call is a service account, including by impersonation, the project associated with the service account is used as the quota project.
  • If the principal for the API is a workforce identity federation user, the workforce pools user project is used as the quota project.

If none of the above checks yield a quota project, the request fails.

There are several ways to set quota projects. If the quota project is specified by more than one method, the following precedence is applied:

  • Programmatically
  • Environment variable
  • Credentials used to authenticate the request

Set the quota project programmatically

You can explicitly set the quota project in your application. This method overrides all other definitions. The principal used to authenticate the request must have the required permission on the specified quota project.

How you set the quota project programmatically depends on whether you're using a client library, the gcloud CLI, or a REST request.

Client library

You can set the value for the quota project by using client options when you create the client. This method works well if you want to control the value for your quota project from your application, regardless of what environment it's running in.

For more information about implementing client options, see your client library documentation.

gcloud CLI

You can set the quota project for all gcloud CLI commands by using the billing/quota_project property in your gcloud CLI configuration. You can also set the quota project for a specific command by using the --billing-project flag, which takes precedence over the configuration property.

For more information about gcloud CLI configurations, see the gcloud config reference page . For more information about the --billing-project flag, see the global flags reference.
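As a sketch, the two approaches look like this (the project name my-quota-project is illustrative):

```
# Set the quota project for all commands in the active gcloud configuration
gcloud config set billing/quota_project my-quota-project

# Override the quota project for a single command
gcloud storage ls --billing-project=my-quota-project
```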

REST request

You can specify the quota project in a REST request using the x-goog-user-project header. The principal making the request must have the required permissions on the quota project.
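As a minimal sketch, assuming you already have a valid access token (for example from gcloud auth print-access-token) and using an illustrative endpoint, the header is attached like any other HTTP header:

```python
import urllib.request

# Illustrative endpoint; ACCESS_TOKEN is a placeholder for a real OAuth token.
url = "https://translate.googleapis.com/language/translate/v2/languages"
req = urllib.request.Request(url, headers={
    "Authorization": "Bearer ACCESS_TOKEN",
    "x-goog-user-project": "my-quota-project",  # project whose quota is consumed
})
# urllib.request.urlopen(req) would now send the request with the header attached.
```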

For more information and sample code, see Set the quota project with a REST request .

Set the quota project using an environment variable

Client libraries for some languages support setting the quota project using an environment variable. This approach can be helpful if you want to set the quota project differently in different shells, or to override the quota project associated with the credential. The principal for any request must have the required permissions on the quota project specified by the environment variable.

The environment variable is language dependent:
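For example, several of the Google auth libraries (including the Python and Go ones) read GOOGLE_CLOUD_QUOTA_PROJECT; check your client library's documentation to confirm the variable it uses. A minimal shell setup (project name illustrative):

```shell
# Quota project picked up by supported client libraries in this shell
export GOOGLE_CLOUD_QUOTA_PROJECT=my-quota-project
```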

Set the quota project using authentication credentials

If the quota project isn't specified, the authentication libraries try to determine it from the credentials that were used for the request. This process depends on the type of credentials that were used to authenticate the request:

  • Service account – The project associated with the service account is used as the quota project.
  • User credentials – For a local development environment, Application Default Credentials finds your user credentials from the local ADC file. That file can also specify a quota project. If you have the project set in your Google Cloud CLI config, and you have the required permissions on that project, the quota project is set by default when you create the local ADC file. You can also set the ADC quota project by using the auth application-default set-quota-project command.
  • API keys – When you use an API key to provide credentials for a request, the project associated with the API key is used as the quota project.
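The quota project stored in the local ADC file can be updated at any time with the command named above (project name illustrative):

```
# Rewrite the quota project recorded in the local ADC file
gcloud auth application-default set-quota-project my-quota-project
```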

Permission required to set and use the quota project

To get the permission that you need to set a project as the quota project, or use that quota project in a request, ask your administrator to grant you the Service Usage Consumer ( roles/serviceusage.serviceUsageConsumer ) IAM role on the project. For more information about granting roles, see Manage access .

You might also be able to get this permission with custom roles or other predefined roles .

If you use a project you created as your quota project, you have the necessary permissions.

What's next

  • About the quota project
  • Learn more about Application Default Credentials
  • Get more information about authentication
  • Understand quotas

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License , and code samples are licensed under the Apache 2.0 License . For details, see the Google Developers Site Policies . Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2024-02-14 UTC.


About Quotas

Quotas allow you to keep track of how many respondents meet a condition in your survey. You can also specify what will happen to your respondents once a quota has been met, such as ending the survey prematurely or deleting the extra responses.

The purpose of a quota is to make sure that you gather only the exact amount of data required for your study.

Quotas save automatically as you make changes. A timestamp will appear along the top next to the quota’s name to show the last time it was saved.

Creating Quotas

  • In the Survey tab, expand the left menu.

  • Select Quotas .

  • Click Add a quota and choose a quota type:

  • Simple logic quota : Define when a respondent meets your quota. The quota count is incremented each time a respondent who meets the conditions you set comes into the survey. Despite the name, you can build quite complex conditions if you need to; however, simple logic quotas don't have the percentage-based capabilities of cross logic quotas.
  • Cross logic quota : Use percentages to define how respondents are distributed in your quota. Specify what percentage of respondents should match each criterion, and Qualtrics will do the heavy lifting of figuring out how many respondents are needed for each combination of conditions.
  • Click Create quota .

  • Build the conditions a respondent must meet, set the number of people who should meet them, choose what happens when they do, and click Save .

  • Expand Quota options .


  • Click New Quota to rename your quota.
  • Click the 100 to set the quota target . This is the maximum number of people you want to meet the conditions you set. See the linked section for more details.
  • Your changes are saved automatically. Publish them to respondents when you are ready.


Setting Quota Counts and Targets

Next to every quota, you’ll see a pair of numbers – for example, 0/100. The number on the left of the slash (e.g., 0) is the quota count. The number on the right (e.g., 100) is the quota target.

Quota Target

To ensure that you gather only the exact amount of data you require for your study, it is important to set a number limit for your quota. The quota target is the maximum number of respondents you want to take your survey who fulfill the quota conditions.

To set the quota target, all you need to do is click the current quota target and change it.

When quotas are listed, column labeled Quota target with numbers you can click into

Quota Count

The quota count is the number that indicates how many respondents have taken your survey and met the conditions for the quota. This number automatically changes as new respondents take your survey and meet the specified conditions.

You can also manually adjust the quota count. This can be useful in circumstances where the quota was added after you already started collecting responses, or where the quota was created before launching the survey, but it was set up incorrectly. You might also use this option to reset your quotas to zero before starting another round of your study.

Adjusting Simple Quota Counts

  • Click the value under Quota count for the quota you want to adjust and enter the new count.

  • Click Save .

  • Confirm the change in the window that appears.

Adjusting Cross Logic Quota Counts

For cross logic quotas, you cannot adjust one overall quota count, because it is not just one quota, but technically a combination of many demographic quotas. Instead, you need to individually adjust each segment of the cross logic quota.


  • Find the segment you want to adjust and click the value under Quota count .

  • Enter the new count in the window that appears.

  • Repeat steps for each segment you want to adjust.

To schedule an automatic Quota reset

You can also automatically reset your quotas after a certain amount of time. This can be useful if, for example, you need to collect a specific number of responses each month.

You can schedule when your quota resets its count to zero. This can be useful for studies where you send the same survey and plan to use the same quotas over and over again.

  • Click the dropdown arrow next to the quota.

  • Select Scheduled quota .


  • Choose the date the quota should start resetting. Qtip: Quotas will reset at midnight on this date, according to the time zone set in your account.
  • Choose how many times the quota should reset before stopping. Example: If you reset the quota count every month, you can have it stop resetting after 6 months; if the quota resets every 20 days, you can have it stop after five 20-day periods have passed.

Setting Simple Quota Conditions

The next step in setting up a quota is to specify conditions that must be true for a respondent to increment the quota. Your quota will be incremented when a respondent meets the condition(s) you have set and finishes the survey.

Example: Only respondents who say they’re married will increment this quota.


This section will discuss setting conditions for simple logic quotas. Your account might also include the option to create what is called a cross logic quota. Refer to the Cross Logic Quotas section below for information on setting logic for this type of quota.

To set Quota conditions

  • In the first field of the condition, choose what the quota is based on (for example, Question), then select the specific item (for example, Q1).

  • Then fill out the menus that follow with your specific criteria. Example: We want to limit how many married people come into the survey. So first, we choose the question where we ask the respondent’s marital status, Q8. Then the next fields specify that if “Married” is what the respondent “Selected,” they will count towards the quota.
  • If desired, click the green plus sign ( + ) to the right of the condition to add another condition.


  • If you want to separate out additional conditions, select Add Logic Set . Example: In the screenshot above, we’re making a quota for people who are married and employed. The respondent can select 1 of 2 answers to indicate they are employed, but they must also indicate that they’re married to qualify for this quota. Therefore, we separate the employment status conditions from the marital status condition into different logic sets.

Specifying What Happens When a Quota is Met

Under Quota options, you can decide what should happen to your respondents once your quota has been met. Here, we’ll describe all of the options available.


End Current Survey

“End current survey” does exactly what its name tells you. If a respondent meets the conditions for a quota and that quota has already been filled, the respondent will immediately be directed out of the survey.

Under Customize end of survey experience , you'll be able to customize what happens when respondents get to the end of the survey. These are the same message options and actions available in an end of survey element .


Although there are many options available, there are 2 settings in particular that you may want to consider:

  • Do not increment quota counts: By default, a respondent who is terminated from the survey due to the quota being full will still cause the quota to increment. Check this box to prevent over-quota respondents from incrementing quota counts.
  • Do not record survey response: Selecting this option will prevent all over-quota responses from being recorded in your data, potentially saving you from using up unwanted responses. Responses removed this way cannot be retrieved . For more information on screening out respondents, see Screen-Out Management .

Prevent All New Survey Sessions

“Prevent all new survey sessions” stops new respondents from entering the survey once the quota is filled. You can select Show custom inactive survey message to adjust what message is shown to respondents trying to enter the survey.


You can add a new message in this window, or you can create a message in your library with the message type set to Inactive Survey .

Do Not Display a Question

“Do not display question” hides a question of your choice once the quota has been filled. You can hide this question for all respondents or check the box to only hide it for respondents who meet the quota condition.


Do Not Display a Block

“Do not display block” lets you hide a question block once the quota has been filled. You can hide this block for all respondents or check the box to only hide it for those that meet the quota condition.


None (For Skip Logic and Survey Flow)

If “None” is selected, no preset action will occur when the quota is filled. This option is designed to let you use branch logic, skip logic , or display logic to determine what happens once the quota is filled.


The most common use for this setting is in a situation where you need to mimic the behavior of the “End current survey” or “Prevent all new survey sessions,” but would also like to record embedded data in the response.

Example: We’re screening out respondents who meet our demographic conditions once the quota has been filled. Normally, “End current survey” would meet this need, but in this case we also want to record embedded data about the response. Therefore, we’d use “None” as our quota option.


Website Feedback Actions

“Website feedback actions” will only be available to a user if they have access to the Website / App Feedback project type. For more information, visit the Survey Quotas section in the Website / App Feedback pages.


Over quota options

In addition to specifying what happens when your quota is met, you also get a chance to specify what should happen to over quota responses. Over quota responses are responses that meet the requirements of your quota after it has been filled. You can either keep and record these responses, or delete them. This allows you to be more efficient when setting up branch logic for your quotas.


“Over quota options” appear for all possible quota outcomes except “End Current Survey.” This feature is available to any users with access to quotas.

Adding Quotas to Groups

Quota groups let you organize your quotas. They are also important to setting up multiple match handling and public quota dashboards .

To create a quota group, click Add new group .


Click a group to start creating quotas inside it.


Moving Quotas to Groups

You can move an existing quota into a different group by dragging and dropping quotas into the desired groups.


You can also move a quota by doing the following:

  • Click Move to.

  • Select the destination group and the quota's position within it:

  • First in the list
  • Last in the list
  • Before a selected quota in the group
  • After a selected quota in the group

Qtip: By default, a respondent will increase the count for all of the quotas they meet the conditions for. However, the order of quotas matters for multiple match handling .
  • Click Confirm .


Quota Group Actions

If you click the dropdown next to a quota group, you can perform the following actions:

  • Move Up / Move Down: Change the order of quota groups in the list. Qtip: You can also drag and drop quotas to reorder them.
  • Public quota dashboard active / inactive: See Public Quota Dashboards .
  • View group dashboard: See Public Quota Dashboards .
  • Multiple match handling: See Multiple Match Handling .
  • Duplicate group: Create a copy of the quota group, with copies of the same quotas inside.
  • Delete group: Delete the group. All quotas in the group will be deleted, too.

Using Quotas in Other Parts of Qualtrics

In addition to specifying what happens to respondents in the quota editor, you can also use quotas in other contexts.

  • In display logic, for example to display a coupon code only if a quota hasn't been met.

  • As a way to limit the number of total responses for your survey. See the Limiting Total Responses to a Survey section.

  • As Piped Text , to insert quota counts into your survey.

Qtip: Are you looking for examples of how other Qualtrics users commonly use quotas? Check out these support pages:

  • Panel Company Integration : Guide to setting up your survey for use with an online panel.
  • Appointment / Event Registration Survey : Create a survey where users can sign up for an event or an appointment. Time slots no longer appear to new respondents as they are reserved by other respondents.
  • Create an Anonymized Raffle : Learn how to run a raffle in a Qualtrics survey, using quotas to limit the prize’s distribution.

Limiting Total Responses to a Survey

Quotas can be used to limit the total number of individuals who respond to your survey. This may be helpful if your account has a response limit. In the below example, we set a quota for a survey that can only have 100 responses, regardless of the respondents’ demographics or how they answered the other questions in the survey.

  • Create a Simple logic quota .

  • In the first field of the quota condition , select Question .
  • In the next field, select a question that everybody in the survey will see. It is best to use a question that requires a response or a descriptive text question used for an introduction.
  • Select any answer choice.
  • Select Is Displayed . By using a statement that will always be true, you can ensure the quota will increment correctly. Qtip: Quotas are incremented upon survey submission. Although unlikely, it is possible to go over quota. This may occur if 2 individuals submit their survey at the exact same time.
  • Click Quota options .


  • Decide whether you want to keep and record, or delete, any over quota responses.

Creating a Public Quota Dashboard

There may be situations where you wish to allow others to keep track of your quota counts. Public quota dashboards are webpages that display your quotas’ progress. You can link anyone to this page, regardless of whether they have a Qualtrics account.

To create a Public Quota Dashboard

Each group of quotas can have its own public quota dashboard.

Click the dropdown next to a quota group, then select Public quota dashboard inactive .


As soon as this is selected, your public quota dashboard is activated.


To access a Public Quota Dashboard

Simple and cross logic quotas are shown in separate sections on the dashboard. Simple logic quotas are displayed as gauge charts, while cross logic quotas are displayed as a table breaking down the counts of each quota segment.

There are 2 versions of your dashboard you can view and share: a version with all of your quota groups and a version with just one group of quotas.

To open an individual quota group’s dashboard,

click the quota group, then click the dashboard icon in the upper-right.

You can also open a version of the dashboard with all of your quota groups inside. Each quota group will be a separate page of the dashboard. To view this,

  • Click Tools .

  • Select View public quota dashboard .


To share Public Quota Dashboards


  • Click Copy Link to copy the report’s URL so you can share it. Qtip: This link includes exactly what you see, including all the same report pages and download options.
  • Download CSV / TSV for group: Download data just for the quota group you’ve selected to the left.
  • Download CSV / TSV for all groups: Download data for all of the quota groups included in the dashboard. This option is only available for the public quota dashboard that contains all of your groups.

Using Advanced Quota Options

The following features are only included with Advanced Quotas:

  • Cross Logic Quotas : Use advanced logic based on percentages when setting up your Quota.
  • Multiple Match Handling : Specify how your quotas should be incremented if a respondent meets the conditions of multiple quotas.
  • Scheduled Quotas : Schedule quotas to automatically reset after a certain period of time. See To Schedule an Automatic Quota Reset under Setting Quota Counts and Targets .

Setting Conditions in Cross Logic Quotas

Unlike a simple logic quota, a cross logic quota uses percentages to define how respondents are distributed in your quota. This quota type is ideal when you have multiple groups of conditions. With cross logic quotas, you simply specify what percentage of respondents should match each condition, and Qualtrics does the heavy lifting of figuring out how many respondents are needed for each combination of conditions.

A massive list of quota conditions broken down by what percentage each choice should be of the overall quota

To create a Cross Logic Quota

New window with two tiles. Cross logic quotas on the right

  • Set the percentages to reflect how you would like the respondents to be distributed. Example: In this case, 50% of the total respondents for this quota will end up being enrolled in the armed forces and 50% will not be enrolled.
  • Select Add Logic Set to create another set of conditions.

New logic set distributing 6 age brackets. The percentages of age brackets add up to 100

  • Add as many conditions and logic sets as you need for your survey.

Qtip: By clicking the dropdown arrow, you can adjust whether you are connecting statements by And , connecting statements by Or , or specifying a percentage .

Dropdown next to a quota percentage allows you to switch to Or or And instead of a percentage, thus separating logic sets

For example, this allows you to build percentages for more dynamic conditions, as seen below.

Two separate percentages, highlighted to show where each ends and begins

Current Respondent Breakdown

At the bottom of your logic is a distribution table showing how many respondents each combination of conditions gives you, based on the total quota limit and the percentages set above.

At the bottom of the cross logic quota is a table that breaks down the count for each choice by percentage
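The arithmetic behind that distribution table is easy to sketch in shell (the 200-response limit and the 50/50 and 40/60 splits are invented numbers, not taken from any real survey):

```shell
# Hypothetical cross logic quota: total limit 200, logic set A split
# 50%/50%, logic set B split 40%/60%. Each cell's target is
# limit * pctA * pctB, with percentages expressed out of 100.
limit=200
cells=""
for a in 50 50; do
  for b in 40 60; do
    cells="$cells $(( limit * a * b / 10000 ))"
  done
done
echo "per-cell targets:$cells"   # per-cell targets: 40 60 40 60
```

Note that the cell targets sum back to 200, the overall quota limit.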

You can also add custom labels to each cross logic set to make them easier to differentiate. These labels will appear in public quota dashboards .

Click Show quota labels , then click Add custom label next to the group you want to rename.

dropdown next to a cross logic quota

Use the dropdown next to a group to edit or reset the quota’s counts and targets.

Dropdown next to a quota

Setting Up Multiple Match Handling

Sometimes your respondents may meet the conditions for multiple quotas. By default, a respondent will increase the count for all of the quotas they meet the conditions for. Multiple match handling can adjust how Qualtrics handles these multiple-match respondents.

To change how multiple matches are handled

Dropdown arrow next to quota group on leftmost menu. Dropdown menu reveals Multiple Match Handling

  • Click the dropdown arrow to the right of the quota group (not the quota itself).
  • Hover over Multiple Match Handling .
  • Place in all: If a respondent qualifies for multiple quotas, increment all quotas they qualify for.
  • Current defined order: If a respondent qualifies for multiple quotas, use the defined order.
  • Reverse order: If a respondent qualifies for multiple quotas, use the reverse of the defined order.
  • Least filled: If a respondent qualifies for multiple quotas, increment the quota with the lowest filled count.
  • Least filled percent: If a respondent qualifies for multiple quotas, increment the quota with the lowest filled percentage.
  • Most filled: If a respondent qualifies for multiple quotas, increment the quota with the highest filled count.
  • Most filled percent: If a respondent qualifies for multiple quotas, increment the quota with the highest filled percentage.
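As a sketch of how a rule like “least filled” picks a single quota among several matches (the counts here are invented):

```shell
# A respondent matches three quotas whose current counts are 12, 5 and 9.
# Under "least filled", only the quota with the lowest count increments.
counts="12 5 9"
best= best_count=
i=0
for c in $counts; do
  if [ -z "$best_count" ] || [ "$c" -lt "$best_count" ]; then
    best=$i
    best_count=$c
  fi
  i=$(( i + 1 ))
done
echo "increment quota $best (count $best_count)"   # increment quota 1 (count 5)
```

“Least filled percent” works the same way, except it compares count divided by limit rather than the raw count.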

Troubleshooting Quotas

We’ve designed this section for those who have read the material covered in this page and are still having issues with their quotas. Even though quotas can seem complicated, most problems can be diagnosed by checking a few basic things.

  • Make sure that the current quota count and limit are set to the correct number.
  • The Using Logic page has a comprehensive guide to building conditions. Qtip: Anyone can use this guide. You don’t need access to Advanced Quotas / cross logic quotas to build the condition sets described on the Using Logic page.
  • When working with multiple conditions joined by AND / OR, keep in mind the order of operations .
  • If testing your quotas, remember you can delete test responses to de-increment quotas , and that you can manually reset or edit your quota counts .
  • Double-check your quota actions . If you chose “None,” you need to have additional customization set up elsewhere in the survey before anything will happen.
  • Review branch logic and display logic that use your quotas in their conditions.


User, Role, Group, Quota, and Authentication management


How does Galaxy manage users and groups?

How can I assign Quotas to specific users/groups?

How should I manage groups vs roles

What authentication methods are available?

How is dataset privacy managed?

Authentication Systems, what is available and how can I enable it?

Learn the Galaxy user/group management and assign Quotas.

Understand the Role Based Access Control (RBAC) of Galaxy.

Speaker Notes

User Control

.footnote[.center[options in galaxy.yml ]]

  • These options let you control user login.
  • For example, are anonymous users permitted?
  • Are users able to register themselves?
  • Are users able to purge datasets themselves?
  • All of these are questions you will need to consider.
  • The API run-as option ( api_allow_run_as in galaxy.yml ) can be useful if you have an external system submitting jobs to Galaxy on behalf of your users.

User Activation

Require verification that a user’s email is real. You must enable SMTP first.

  • Whenever a user registers, the user activation settings control how that process happens
  • If you require activation, users cannot run tools until they follow the link in the confirmation email
  • You can also prevent users from registering with addresses from specific domains
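A minimal galaxy.yml fragment for this might look as follows (a sketch: the SMTP host is a placeholder, and the fragment is written to a scratch file here rather than your live config):

```shell
# Sketch of the relevant galaxy.yml settings. user_activation_on only
# works once a real smtp_server is configured; the host below is invented.
cat > activation-fragment.yml <<'EOF'
galaxy:
  smtp_server: smtp.example.org:587
  user_activation_on: true
EOF
grep -c 'user_activation_on: true' activation-fragment.yml   # 1
```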

Admin Control

  • In the ansible galaxy training, you set the admin_users variable to define an admin email
  • User impersonation is a very commonly used feature
  • It allows admins to debug issues in their users’ histories
  • A bootstrap API key cannot be used for every task an admin API key can be used for
  • This is because it is not tied to an individual user

User Privacy

  • These options control whether the username or email address is shown in the dropdown in the sharing menus
  • The option “new user dataset access role default private” is important
  • By default when users share by link, all datasets are public
  • When you set this option, datasets are private, even though the history is shared via link
  • Users will complain when it doesn’t work, and have to be educated to click the appropriate buttons

Roles and Groups

Role Based Access Control (RBAC)

  • create roles (each user automatically has their own ‘private’ role)
  • create groups
  • assign roles to groups
  • assign users to groups
  • assign groups to roles
  • assign users to roles
  • assign permission sets to roles
  • assign permission sets to groups
  • Galaxy uses RBAC for permissions in many places
  • Roles can be created, and assigned permissions
  • Roles and groups behave similarly, grouping users together and granting permissions

Dataset Roles

.left-column50[ manage permissions

  • Users with the associated role on a dataset can manage the roles associated with it.
  • Users with an associated role can use/view/download a dataset for analysis. Users must have every role associated with a dataset in order to access it

new_user_dataset_access_role_default_private ( galaxy.yml )

  • When this is set, datasets are private by default. ]

User_roles

  • The manage permission controls which accounts can manage permissions of datasets
  • Access permission is those who can see and work with the data
  • These can be controlled in the permissions menu of datasets
  • Or more generally at the history level
  • Users must have every role listed in order to access that dataset
  • This leads to the odd case where users wish to share with multiple groups
  • But by adding more roles, it becomes unavailable to everyone

Library Roles

.left-column50[

  • access library : Restrict access to this library to only users having associated role
  • manage library permissions : Users having associated role can manage roles associated with permissions on this library item
  • add library item : Users having associated role can add library items to this library item

Library_roles

  • Access library permits users with any of the listed roles to access the library
  • No roles means a public library
  • Generally the last three are set to the same values, unless you have complicated requirements
  • In the library management, someone with any subset of the roles listed may make changes
  • This is very different from dataset permission management, where users must have every role
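That difference can be sketched directly: datasets require every role, libraries accept any role (the role names below are invented):

```shell
# A user holding only lab-a, against an item tagged with lab-a AND lab-b.
item_roles="lab-a lab-b"
user_roles="lab-a"

has_role() { case " $user_roles " in *" $1 "*) return 0 ;; esac; return 1; }

dataset=granted                      # datasets: user must hold EVERY role
for r in $item_roles; do has_role "$r" || dataset=denied; done

library=denied                       # libraries: ANY matching role suffices
for r in $item_roles; do has_role "$r" && library=granted; done

echo "dataset: $dataset, library: $library"   # dataset: denied, library: granted
```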

Used to control user disk usage.

You must create quotas in the admin interface before any quota is enforced; otherwise usage is ‘unlimited’

  • Examples: “10000MB”, “99 gb”, “0.2T”, “unlimited”

Default for user class:

  • Unregistered Users
  • Registered Users

or associated with Groups or Users

  • You can enable quotas in your galaxy.yml file
  • When a user has more data than their quota permits, they are prevented from starting new jobs.
  • Many sites set up a “quota increase request” form to let users request increases for specific, temporary projects

class: left

Quota Details

  • Quotas can be set for Users, or all users of a Group
  • But it is not a “group quota”
  • The quota is applied to individual users
  • Quotas are stored in the DB tables galaxy_user , galaxy_group , and quota
  • Quotas can be set for Users or Groups
  • But it is applied individually, as users may receive multiple quota changes
  • E.g. a user working for two groups, might receive two different quota increases

Quota Automation

  • There is currently no quota automation.
  • Some individuals have written their own quota automation but it is quite ugly ( usegalaxy-eu/quota-sync )
  • Could be nicer with a lot of work
  • Quotas are like group/user management: not managed by files, only within UI/API
  • Quotas can be managed through the API
  • Some people want to automate this process, but it needs more work.

Authentication Systems

  • Galaxy can be configured to use LDAP or Active Directory for authentication
  • There is a config file named config/auth_conf.xml
  • (optional) Galaxy binds with some bind credentials
  • Searches for the user DN
  • Re-binds with the user DN and password
  • If the user is found, they are logged in
  • LDAP and Active Directory can be used as an authentication method
  • This is done through the auth_conf.xml file
  • When the user logs in, the LDAP server is queried for the user

Shibboleth, CAS

  • Many alternative authentication systems are widely used at universities and organisations
  • Galaxy itself does not natively support these systems
  • However, you can use a proxy to authenticate users
  • nginx and apache have modules for both of these methods
  • Shibboleth and CAS are commonly used at some universities
  • While Galaxy does not natively support these, you can use a proxy to authenticate users
  • Nginx and apache both support this
  • Galaxy can be configured to use OpenID Connect for authentication
  • config/oidc_backends_config.xml
  • config/oidc_config.xml
  • LS Login (Elixir AAI)
  • OIDC is a common authentication method
  • There are two configuration files required for this
  • Galaxy supports a variety of providers
  • OIDC means you as an administrator don’t have to worry about validating the account, or storing passwords

Built in Authentication

  • Galaxy has its own authentication system
  • Enabled by default
  • Some options relate to IT security policies
  • Check with your local IT authority for best practices for your organisation
  • Please consider not setting a password expiration period, as NIST recommends against it

Others ( REMOTE_USER )

  • For all other authentication systems
  • If your authentication system provides a username in some secure way to the webserver
  • Then you can use it to authenticate users
  • The webserver must set the REMOTE_USER header
  • Galaxy will trust this header
  • If you use a different authentication system than one previously mentioned
  • and your users are authenticated in some manner through your webserver/proxy
  • Then you can take advantage of REMOTE_USER authentication
  • It is a very simple authentication method, the webserver sets a header, and galaxy implicitly trusts it.

Remote User (Security)

  • If you have local users on the Galaxy head node
  • Then please set remote_user_secret
  • This will send an additional secret header to Galaxy that will be validated
  • Otherwise local users can curl your Galaxy server, and impersonate any user.
  • An important aspect for security is that if you have local users on the Galaxy head node
  • Then you should set the remote_user_secret option, to prevent them impersonating other users
  • Galaxy has a powerful user and group management system that can be utilized for quota management.
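The attack remote_user_secret defends against is simple to sketch (hostname, port and victim address are assumptions; the forged request is only printed here, never sent):

```shell
# Without the shared secret, Galaxy trusts the REMOTE_USER header blindly,
# so any local process could claim to be another user:
forged='curl -H "REMOTE_USER: someone-else@example.org" http://localhost:8080/'
echo "a local user could simply run: $forged"
```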


How do you implement user disk quotas?

To implement disk quotas, use the following steps:

  • Enable quotas per file system by modifying the /etc/fstab file.
  • Remount the file system(s).
  • Create the quota database files and generate the disk usage table.
  • Assign quota policies.
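Concretely, the four steps map onto commands like these (a dry-run sketch: the /home mount point and device are assumptions, and every step requires root, so each command is echoed rather than executed):

```shell
run() { echo "+ $*"; }            # swap echo for real execution as root

# 1. Add usrquota/grpquota to the mount options in /etc/fstab, e.g.:
#    /dev/sda2  /home  ext4  defaults,usrquota,grpquota  0 2
run mount -o remount /home        # 2. remount so the options take effect
run quotacheck -cug /home         # 3. build aquota.user/aquota.group and the usage table
run edquota -u alice              # 4. assign a quota policy to a user
run quotaon -v /home              # finally, switch enforcement on
```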

How do I set user rights?

You can configure the user rights assignment settings in the following location within the Group Policy Management Console (GPMC) under Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment, or on the local device by using the Local Group Policy Editor (gpedit.msc).

How do I set a quota limit on a folder?

Go to Quota templates and, in the central area, right-click and select Create quota template. Configure the new quota by entering the name of the template and the limit, choose the type (unconditional or conditional), and click OK to create it.

How do you add a quota?

  • Click Storage > Quotas.
  • From the Quotas on SVM list, select the storage virtual machine (SVM) on which you want to create a quota.
  • In the User Defined Quotas tab, click Create.
  • Type or select information as prompted by the wizard.
  • Confirm the details, and then click Finish to complete the wizard.

What command is used to set user and group quotas?

Use the edquota command as shown below, to edit the quota information for a specific user. For example, to change the disk quota for user ‘ramesh’, use edquota command, which will open the soft, hard limit values in an editor as shown below.
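For scripted changes, setquota is the non-interactive counterpart of edquota (a sketch: the user name, the limits and the filesystem are examples, and since setquota requires root the finished command is only printed here; block limits are in 1 KiB units):

```shell
# Build a setquota invocation: 500 MB soft / 600 MB hard block limits,
# no inode limits, for user ramesh on the filesystem mounted at /home.
soft_mb=500 hard_mb=600
cmd="setquota -u ramesh $(( soft_mb * 1024 )) $(( hard_mb * 1024 )) 0 0 /home"
echo "$cmd"   # setquota -u ramesh 512000 614400 0 0 /home
```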

What is the functionality of assigning quotas to specific projects?

To prevent system capacities from being exhausted without notification, you can set up quotas. Quotas are operational limits. For example, the number of gigabytes allowed for each project can be controlled so that cloud resources are optimized. Quotas can be enforced at both the project and the project-user level.

What is user rights assignment?

User rights assignments are settings applied to the local device. They allow users to perform various system tasks, such as local logon, remote logon, accessing the server from network, shutting down the server, and so on. Saved user credentials might be compromised if someone else has this privilege.

How do I setup a user access?

Configuring User Access

  • Navigate to Settings > Administration Settings and select Manage User Roles.
  • In the View Role List of list, select Finance .
  • Click Add new role.
  • Enter the Role Name.
  • Optionally, enter a Description for the custom role.
  • Select the permissions that you want to set for the role.
  • Click save.

Where is quota management?

Quota management is a valuable feature that allows you to restrict the storage capacity of shared resources in Windows Server 2016. Step 1: Start by logging into the Windows Server 2016. Then, on the Server Manager’s dashboard, click on “Manage” and select “Add Roles and Features”.

Why do we use quotas on a Windows server?

Disk quotas provide a means of controlling and enforcing a user’s ability to save data to a volume. It can be enforced at the user level and restricted on a per-volume basis. Typically, you set a user’s quota and let Windows Server 2003 monitor the user’s disk consumption.

How do you use a quota?

How to set up Quota per volume on Windows 10

  • Open File Explorer (Windows key + E).
  • Click on This PC.
  • Under “Devices and drives,” right-click the drive you want to limit and select Properties.
  • Click on the Quota tab.
  • Click the Show Quota Settings button.
  • Check the Enable quota management option.

What is quota command in Linux?

The Linux quota command displays users’ disk usage and limits. By default, only the user quotas are printed. Quota reports the quotas of all the filesystems listed in /etc/mtab. For NFS-mounted filesystems, usage is obtained through a call to the rpc.rquotad daemon on the NFS server.


  • Route configuration
  • Secured routes
  • Configuring ExternalIPs for services
  • Configuring ingress cluster traffic using an Ingress Controller
  • Configuring ingress cluster traffic using a load balancer
  • Configuring ingress cluster traffic on AWS using a Network Load Balancer
  • Configuring ingress cluster traffic using a service external IP
  • Configuring ingress cluster traffic using a NodePort
  • About the Kubernetes NMState Operator
  • Observing node network state
  • Updating node network configuration
  • Troubleshooting node network configuration
  • Configuring the cluster-wide proxy
  • Configuring a custom PKI
  • Load balancing on OpenStack
  • Associating secondary interfaces metrics to network attachments
  • Storage overview
  • Understanding ephemeral storage
  • Understanding persistent storage
  • Persistent storage using AWS Elastic Block Store
  • Persistent storage using Azure Disk
  • Persistent storage using Azure File
  • Persistent storage using Cinder
  • Persistent storage using Fibre Channel
  • Persistent storage using FlexVolume
  • Persistent storage using GCE Persistent Disk
  • Persistent storage using hostPath
  • Persistent Storage using iSCSI
  • Persistent storage using local volumes
  • Persistent storage using NFS
  • Persistent storage using Red Hat OpenShift Container Storage
  • Persistent storage using VMware vSphere
  • Configuring CSI volumes
  • CSI inline ephemeral volumes
  • CSI volume snapshots
  • CSI volume cloning
  • CSI automatic migration
  • AWS Elastic Block Store CSI Driver Operator
  • Azure Disk CSI Driver Operator
  • GCP PD CSI Driver Operator
  • OpenStack Cinder CSI Driver Operator
  • OpenStack Manila CSI Driver Operator
  • Red Hat Virtualization CSI Driver Operator
  • VMware vSphere CSI Driver Operator
  • Expanding persistent volumes
  • Dynamic provisioning
  • Registry overview
  • Image Registry Operator in OpenShift Container Platform
  • Configuring the registry for AWS user-provisioned infrastructure
  • Configuring the registry for GCP user-provisioned infrastructure
  • Configuring the registry for OpenStack user-provisioned infrastructure
  • Configuring the registry for Azure user-provisioned infrastructure
  • Configuring the registry for OpenStack
  • Configuring the registry for bare metal
  • Configuring the registry for vSphere
  • Accessing the registry
  • Exposing the registry
  • Operators overview
  • What are Operators?
  • Packaging formats
  • Common terms
  • Concepts and resources
  • Architecture
  • Dependency resolution
  • Operator groups
  • Operator conditions
  • OperatorHub
  • Red Hat-provided Operator catalogs
  • Extending the Kubernetes API with CRDs
  • Managing resources from CRDs
  • Creating applications from installed Operators
  • Installing Operators in your namespace
  • Adding Operators to a cluster
  • Updating installed Operators
  • Deleting Operators from a cluster
  • Configuring proxy support
  • Viewing Operator status
  • Managing Operator conditions
  • Allowing non-cluster administrators to install Operators
  • Managing custom catalogs
  • Using OLM on restricted networks
  • About the Operator SDK
  • Upgrading projects for newer Operator SDK versions
  • Getting started
  • Project layout
  • Ansible support
  • Kubernetes Collection for Ansible
  • Using Ansible inside an Operator
  • Custom resource status management
  • Helm support
  • Defining cluster service versions (CSVs)
  • Working with bundle images
  • Validating Operators using the scorecard
  • Configuring built-in monitoring with Prometheus
  • Configuring leader election
  • Migrating package manifest projects to bundle format
  • Cluster Operators reference
  • CI/CD overview
  • Understanding image builds
  • Understanding build configurations
  • Creating build inputs
  • Managing build output
  • Using build strategies
  • Custom image builds with Buildah
  • Performing basic builds
  • Triggering and modifying builds
  • Performing advanced builds
  • Using Red Hat subscriptions in builds
  • Securing builds by strategy
  • Build configuration resources
  • Troubleshooting builds
  • Setting up additional trusted certificate authorities for builds
  • Migrating from Jenkins to Tekton
  • OpenShift Pipelines release notes
  • Understanding OpenShift Pipelines
  • Installing OpenShift Pipelines
  • Uninstalling OpenShift Pipelines
  • Creating CI/CD solutions for applications using OpenShift Pipelines
  • Working with OpenShift Pipelines using the Developer perspective
  • Reducing resource consumption of OpenShift Pipelines
  • Using pods in a privileged security context
  • Securing webhooks with event listeners
  • Authenticating pipelines using git secret
  • Viewing pipeline logs using the OpenShift Logging Operator
  • OpenShift GitOps release notes
  • Understanding OpenShift GitOps
  • Installing OpenShift GitOps
  • Uninstalling OpenShift GitOps
  • Configuring an OpenShift cluster by deploying an application with cluster configurations
  • Deploying a Spring Boot application with Argo CD
  • Configuring SSO for Argo CD using Dex
  • Configuring SSO for Argo CD using Keycloak
  • Running Control Plane Workloads on Infra nodes
  • Sizing requirements for GitOps Operator
  • Overview of images
  • Configuring the Cluster Samples Operator
  • Using the Cluster Samples Operator with an alternate registry
  • Creating images
  • Managing images overview
  • Tagging images
  • Image pull policy
  • Using image pull secrets
  • Managing image streams
  • Using image streams with Kubernetes resources
  • Triggering updates on image stream changes
  • Image configuration resources
  • Using templates
  • Using Ruby on Rails
  • Using images overview
  • Configuring Jenkins images
  • Jenkins agent
  • Source-to-image
  • Customizing source-to-image images
  • Building Applications overview
  • Working with projects
  • Creating a project as another user
  • Configuring project creation
  • Creating applications using the Developer perspective
  • Creating applications using the CLI
  • Viewing application composition using the Topology view
  • Understanding Helm
  • Installing Helm
  • Configuring custom Helm chart repositories
  • Working with Helm releases
  • Understanding Deployments and DeploymentConfigs
  • Managing deployment processes
  • Using deployment strategies
  • Using route-based deployment strategies
  • Resource quotas across multiple projects
  • Using config maps with applications
  • Monitoring project and application metrics using the Developer perspective
  • Monitoring application health
  • Editing applications
  • Pruning objects to reclaim resources
  • Idling applications
  • Deleting applications
  • Using the Red Hat Marketplace
  • Overview of machine management
  • Creating a machine set on AWS
  • Creating a machine set on Azure
  • Creating a machine set on GCP
  • Creating a machine set on OpenStack
  • Creating a machine set on RHV
  • Creating a machine set on vSphere
  • Manually scaling a machine set
  • Modifying a machine set
  • Deleting a machine
  • Applying autoscaling to a cluster
  • Creating infrastructure machine sets
  • Adding a RHEL compute machine
  • Adding more RHEL compute machines
  • Adding compute machines to user-provisioned infrastructure clusters
  • Adding compute machines to AWS using CloudFormation templates
  • Adding compute machines to vSphere
  • Adding compute machines to bare metal
  • Deploying machine health checks
  • Overview of nodes
  • Viewing pods
  • Configuring a cluster for pods
  • Automatically scaling pods with the horizontal pod autoscaler
  • Automatically adjust pod resource levels with the vertical pod autoscaler
  • Providing sensitive data to pods
  • Creating and using config maps
  • Using Device Manager to make devices available to nodes
  • Including pod priority in pod scheduling decisions
  • Placing pods on specific nodes using node selectors
  • About pod placement using the scheduler
  • Configuring the default scheduler to control pod placement
  • Scheduling pods using a scheduler profile
  • Placing pods relative to other pods using pod affinity and anti-affinity rules
  • Controlling pod placement on nodes using node affinity rules
  • Placing pods onto overcommited nodes
  • Controlling pod placement using node taints
  • Controlling pod placement using pod topology spread constraints
  • Running a custom scheduler
  • Evicting pods using the descheduler
  • Running background tasks on nodes automatically with daemonsets
  • Running tasks in pods using jobs
  • Viewing and listing the nodes in your cluster
  • Working with nodes
  • Managing nodes
  • Managing the maximum number of pods per node
  • Using the Node Tuning Operator
  • Remediating nodes with the Poison Pill Operator
  • Understanding node rebooting
  • Freeing node resources using garbage collection
  • Allocating resources for nodes
  • Allocating specific CPUs for nodes in a cluster
  • Configuring the TLS security profile for the kubelet
  • Machine Config Daemon metrics
  • Creating infrastructure nodes
  • Using containers
  • Using Init Containers to perform tasks before a pod is deployed
  • Using volumes to persist container data
  • Mapping volumes using projected volumes
  • Allowing containers to consume API objects
  • Copying files to or from a container
  • Executing remote commands in a container
  • Using port forwarding to access applications in a container
  • Using sysctls in containers
  • Viewing system event information in a cluster
  • Analyzing cluster resource levels
  • Setting limit ranges
  • Configuring cluster memory to meet container memory and risk requirements
  • Configuring your cluster to place pods on overcommited nodes
  • Enabling features using FeatureGates
  • Using remote worker node at the network edge
  • Red Hat OpenShift support for Windows Containers overview
  • Red Hat OpenShift support for Windows Containers release notes
  • Understanding Windows container workloads
  • Enabling Windows container workloads
  • Creating a Windows MachineSet object on AWS
  • Creating a Windows MachineSet object on Azure
  • Creating a Windows MachineSet object on vSphere
  • Scheduling Windows container workloads
  • Windows node upgrades
  • Using Bring-Your-Own-Host Windows instances as nodes
  • Removing Windows nodes
  • Disabling Windows container workloads
  • OpenShift sanboxed containers release notes
  • Understanding OpenShift sandboxed containers
  • Deploying OpenShift sandboxed containers workloads
  • Uninstalling OpenShift sandboxed containers workloads
  • Upgrade OpenShift sandboxed containers
  • Release notes
  • About Logging
  • Installing Logging
  • About the Cluster Logging custom resource
  • Configuring the logging collector
  • Configuring the log store
  • Configuring the log visualizer
  • Configuring Logging storage
  • Configuring CPU and memory limits for Logging components
  • Using tolerations to control Logging pod placement
  • Moving the Logging resources with node selectors
  • Configuring systemd-journald for Logging
  • Maintenance and support
  • Viewing logs for a specific resource
  • Viewing cluster logs in Kibana
  • Forwarding logs to third party systems
  • Enabling JSON logging
  • Collecting and storing Kubernetes events
  • Updating Logging
  • Viewing cluster dashboards
  • Viewing Logging status
  • Viewing the status of the log store
  • Understanding Logging alerts
  • Collecting logging data for Red Hat Support
  • Troubleshooting for Critical Alerts
  • Uninstalling Logging
  • Exported fields
  • Monitoring overview
  • Configuring the monitoring stack
  • Enabling monitoring for user-defined projects
  • Managing metrics
  • Managing alerts
  • Reviewing monitoring dashboards
  • Accessing third-party UIs
  • Troubleshooting monitoring issues
  • About metering
  • Installing metering
  • Upgrading metering
  • About configuring metering
  • Common configuration options
  • Configuring persistent storage
  • Configuring the Hive metastore
  • Configuring the reporting operator
  • Configuring AWS billing correlation
  • About reports
  • Storage Locations
  • Using metering
  • Examples of using metering
  • Troubleshooting and debugging
  • Uninstalling metering
  • Recommended host practices
  • Recommended host practices for IBM Z & LinuxONE environments
  • Recommended cluster scaling practices
  • Using Cluster Loader
  • Using CPU Manager
  • Using Topology Manager
  • Scaling the Cluster Monitoring Operator
  • The Node Feature Discovery Operator
  • The Driver Toolkit
  • Planning your environment according to object maximums
  • Optimizing storage
  • Optimizing routing
  • Optimizing networking
  • Managing bare metal hosts
  • What huge pages do and how they are consumed by apps
  • Performance Addon Operator for low latency nodes
  • Performing latency tests for platform verification
  • Creating a performance profile
  • Overview of backup and restore operations
  • Shutting down a cluster gracefully
  • Restarting a cluster gracefully
  • OADP features and plugins
  • About installing OADP
  • Installing and configuring OADP with AWS
  • Installing and configuring OADP with Azure
  • Installing and configuring OADP with GCP
  • Installing and configuring OADP with MCG
  • Installing and configuring OADP with OCS
  • Uninstalling OADP
  • Backing up applications
  • Restoring applications
  • Backing up etcd data
  • Replacing an unhealthy etcd member
  • About disaster recovery
  • Restoring to a previous cluster state
  • Recovering from expired control plane certificates
  • Migrating from version 3 to 4 overview
  • About migrating from OpenShift Container Platform 3 to 4
  • Differences between OpenShift Container Platform 3 and 4
  • Network considerations
  • Installing MTC
  • Installing MTC in a restricted network environment
  • Upgrading MTC
  • Premigration checklists
  • Migrating your applications
  • Advanced migration options
  • MTC release notes
  • Understanding API tiers
  • API compatibility guidelines
  • Editing kubelet log level verbosity and gathering logs
  • About Authorization APIs
  • LocalResourceAccessReview [authorization.openshift.io/v1]
  • LocalSubjectAccessReview [authorization.openshift.io/v1]
  • ResourceAccessReview [authorization.openshift.io/v1]
  • SelfSubjectRulesReview [authorization.openshift.io/v1]
  • SubjectAccessReview [authorization.openshift.io/v1]
  • SubjectRulesReview [authorization.openshift.io/v1]
  • TokenReview [authentication.k8s.io/v1]
  • LocalSubjectAccessReview [authorization.k8s.io/v1]
  • SelfSubjectAccessReview [authorization.k8s.io/v1]
  • SelfSubjectRulesReview [authorization.k8s.io/v1]
  • SubjectAccessReview [authorization.k8s.io/v1]
  • About Autoscale APIs
  • ClusterAutoscaler [autoscaling.openshift.io/v1]
  • MachineAutoscaler [autoscaling.openshift.io/v1beta1]
  • HorizontalPodAutoscaler [autoscaling/v1]
  • About Config APIs
  • APIServer [config.openshift.io/v1]
  • Authentication [config.openshift.io/v1]
  • Build [config.openshift.io/v1]
  • ClusterOperator [config.openshift.io/v1]
  • ClusterVersion [config.openshift.io/v1]
  • Console [config.openshift.io/v1]
  • DNS [config.openshift.io/v1]
  • FeatureGate [config.openshift.io/v1]
  • HelmChartRepository [helm.openshift.io/v1beta1]
  • Image [config.openshift.io/v1]
  • Infrastructure [config.openshift.io/v1]
  • Ingress [config.openshift.io/v1]
  • Network [config.openshift.io/v1]
  • OAuth [config.openshift.io/v1]
  • OperatorHub [config.openshift.io/v1]
  • Project [config.openshift.io/v1]
  • Proxy [config.openshift.io/v1]
  • Scheduler [config.openshift.io/v1]
  • About Console APIs
  • ConsoleCLIDownload [console.openshift.io/v1]
  • ConsoleExternalLogLink [console.openshift.io/v1]
  • ConsoleLink [console.openshift.io/v1]
  • ConsoleNotification [console.openshift.io/v1]
  • ConsolePlugin [console.openshift.io/v1alpha1]
  • ConsoleQuickStart [console.openshift.io/v1]
  • ConsoleYAMLSample [console.openshift.io/v1]
  • About Extension APIs
  • APIService [apiregistration.k8s.io/v1]
  • CustomResourceDefinition [apiextensions.k8s.io/v1]
  • MutatingWebhookConfiguration [admissionregistration.k8s.io/v1]
  • ValidatingWebhookConfiguration [admissionregistration.k8s.io/v1]
  • About Image APIs
  • Image [image.openshift.io/v1]
  • ImageSignature [image.openshift.io/v1]
  • ImageStreamImage [image.openshift.io/v1]
  • ImageStreamImport [image.openshift.io/v1]
  • ImageStreamMapping [image.openshift.io/v1]
  • ImageStream [image.openshift.io/v1]
  • ImageStreamTag [image.openshift.io/v1]
  • ImageTag [image.openshift.io/v1]
  • About Machine APIs
  • ContainerRuntimeConfig [machineconfiguration.openshift.io/v1]
  • ControllerConfig [machineconfiguration.openshift.io/v1]
  • KubeletConfig [machineconfiguration.openshift.io/v1]
  • MachineConfigPool [machineconfiguration.openshift.io/v1]
  • MachineConfig [machineconfiguration.openshift.io/v1]
  • MachineHealthCheck [machine.openshift.io/v1beta1]
  • Machine [machine.openshift.io/v1beta1]
  • MachineSet [machine.openshift.io/v1beta1]
  • About Metadata APIs
  • APIRequestCount [apiserver.openshift.io/v1]
  • Binding [core/v1]
  • ComponentStatus [core/v1]
  • ConfigMap [core/v1]
  • ControllerRevision [apps/v1]
  • Event [events.k8s.io/v1]
  • Event [core/v1]
  • Lease [coordination.k8s.io/v1]
  • Namespace [core/v1]
  • About Monitoring APIs
  • Alertmanager [monitoring.coreos.com/v1]
  • AlertmanagerConfig [monitoring.coreos.com/v1alpha1]
  • PodMonitor [monitoring.coreos.com/v1]
  • Probe [monitoring.coreos.com/v1]
  • Prometheus [monitoring.coreos.com/v1]
  • PrometheusRule [monitoring.coreos.com/v1]
  • ServiceMonitor [monitoring.coreos.com/v1]
  • ThanosRuler [monitoring.coreos.com/v1]
  • About Network APIs
  • ClusterNetwork [network.openshift.io/v1]
  • Endpoints [core/v1]
  • EndpointSlice [discovery.k8s.io/v1]
  • EgressNetworkPolicy [network.openshift.io/v1]
  • EgressRouter [network.operator.openshift.io/v1]
  • HostSubnet [network.openshift.io/v1]
  • Ingress [networking.k8s.io/v1]
  • IngressClass [networking.k8s.io/v1]
  • IPPool [whereabouts.cni.cncf.io/v1alpha1]
  • NetNamespace [network.openshift.io/v1]
  • NetworkAttachmentDefinition [k8s.cni.cncf.io/v1]
  • NetworkPolicy [networking.k8s.io/v1]
  • PodNetworkConnectivityCheck [controlplane.operator.openshift.io/v1alpha1]
  • Route [route.openshift.io/v1]
  • Service [core/v1]
  • About Node APIs
  • Node [core/v1]
  • Profile [tuned.openshift.io/v1]
  • RuntimeClass [node.k8s.io/v1]
  • Tuned [tuned.openshift.io/v1]
  • About OAuth APIs
  • OAuthAccessToken [oauth.openshift.io/v1]
  • OAuthAuthorizeToken [oauth.openshift.io/v1]
  • OAuthClientAuthorization [oauth.openshift.io/v1]
  • OAuthClient [oauth.openshift.io/v1]
  • UserOAuthAccessToken [oauth.openshift.io/v1]
  • About Operator APIs
  • Authentication [operator.openshift.io/v1]
  • CloudCredential [operator.openshift.io/v1]
  • ClusterCSIDriver [operator.openshift.io/v1]
  • Console [operator.openshift.io/v1]
  • Config [operator.openshift.io/v1]
  • Config [imageregistry.operator.openshift.io/v1]
  • Config [samples.operator.openshift.io/v1]
  • CSISnapshotController [operator.openshift.io/v1]
  • DNS [operator.openshift.io/v1]
  • DNSRecord [ingress.operator.openshift.io/v1]
  • Etcd [operator.openshift.io/v1]
  • ImageContentSourcePolicy [operator.openshift.io/v1alpha1]
  • ImagePruner [imageregistry.operator.openshift.io/v1]
  • IngressController [operator.openshift.io/v1]
  • KubeAPIServer [operator.openshift.io/v1]
  • KubeControllerManager [operator.openshift.io/v1]
  • KubeScheduler [operator.openshift.io/v1]
  • KubeStorageVersionMigrator [operator.openshift.io/v1]
  • Network [operator.openshift.io/v1]
  • OpenShiftAPIServer [operator.openshift.io/v1]
  • OpenShiftControllerManager [operator.openshift.io/v1]
  • OperatorPKI [network.operator.openshift.io/v1]
  • ServiceCA [operator.openshift.io/v1]
  • Storage [operator.openshift.io/v1]
  • About OperatorHub APIs
  • CatalogSource [operators.coreos.com/v1alpha1]
  • ClusterServiceVersion [operators.coreos.com/v1alpha1]
  • InstallPlan [operators.coreos.com/v1alpha1]
  • Operator [operators.coreos.com/v1]
  • OperatorCondition [operators.coreos.com/v1]
  • OperatorGroup [operators.coreos.com/v1]
  • PackageManifest [packages.operators.coreos.com/v1]
  • Subscription [operators.coreos.com/v1alpha1]
  • About Policy APIs
  • PodDisruptionBudget [policy/v1]
  • About Project APIs
  • Project [project.openshift.io/v1]
  • ProjectRequest [project.openshift.io/v1]
  • About Provisioning APIs
  • BareMetalHost [metal3.io/v1alpha1]
  • Provisioning [metal3.io/v1alpha1]
  • About RBAC APIs
  • ClusterRoleBinding [rbac.authorization.k8s.io/v1]
  • ClusterRole [rbac.authorization.k8s.io/v1]
  • RoleBinding [rbac.authorization.k8s.io/v1]
  • Role [rbac.authorization.k8s.io/v1]
  • About Role APIs
  • ClusterRoleBinding [authorization.openshift.io/v1]
  • ClusterRole [authorization.openshift.io/v1]
  • RoleBindingRestriction [authorization.openshift.io/v1]
  • RoleBinding [authorization.openshift.io/v1]
  • Role [authorization.openshift.io/v1]
  • About Schedule and quota APIs
  • AppliedClusterResourceQuota [quota.openshift.io/v1]
  • ClusterResourceQuota [quota.openshift.io/v1]
  • FlowSchema [flowcontrol.apiserver.k8s.io/v1beta1]
  • LimitRange [core/v1]
  • PriorityClass [scheduling.k8s.io/v1]
  • PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1beta1]
  • ResourceQuota [core/v1]
  • About Security APIs
  • CertificateSigningRequest [certificates.k8s.io/v1]
  • CredentialsRequest [cloudcredential.openshift.io/v1]
  • PodSecurityPolicyReview [security.openshift.io/v1]
  • PodSecurityPolicySelfSubjectReview [security.openshift.io/v1]
  • PodSecurityPolicySubjectReview [security.openshift.io/v1]
  • RangeAllocation [security.openshift.io/v1]
  • Secret [core/v1]
  • SecurityContextConstraints [security.openshift.io/v1]
  • ServiceAccount [core/v1]
  • About Storage APIs
  • CSIDriver [storage.k8s.io/v1]
  • CSINode [storage.k8s.io/v1]
  • CSIStorageCapacity [storage.k8s.io/v1beta1]
  • PersistentVolumeClaim [core/v1]
  • StorageClass [storage.k8s.io/v1]
  • StorageState [migration.k8s.io/v1alpha1]
  • StorageVersionMigration [migration.k8s.io/v1alpha1]
  • VolumeAttachment [storage.k8s.io/v1]
  • VolumeSnapshot [snapshot.storage.k8s.io/v1]
  • VolumeSnapshotClass [snapshot.storage.k8s.io/v1]
  • VolumeSnapshotContent [snapshot.storage.k8s.io/v1]
  • About Template APIs
  • BrokerTemplateInstance [template.openshift.io/v1]
  • PodTemplate [core/v1]
  • Template [template.openshift.io/v1]
  • TemplateInstance [template.openshift.io/v1]
  • About User and group APIs
  • Group [user.openshift.io/v1]
  • Identity [user.openshift.io/v1]
  • UserIdentityMapping [user.openshift.io/v1]
  • User [user.openshift.io/v1]
  • About Workloads APIs
  • BuildConfig [build.openshift.io/v1]
  • Build [build.openshift.io/v1]
  • CronJob [batch/v1]
  • DaemonSet [apps/v1]
  • Deployment [apps/v1]
  • DeploymentConfig [apps.openshift.io/v1]
  • Job [batch/v1]
  • Pod [core/v1]
  • ReplicationController [core/v1]
  • PersistentVolume [core/v1]
  • ReplicaSet [apps/v1]
  • StatefulSet [apps/v1]
  • About OpenShift Service Mesh
  • Service Mesh 2.x release notes
  • Service Mesh architecture
  • Service Mesh deployment models
  • Service Mesh and Istio differences
  • Preparing to install Service Mesh
  • Installing the Operators
  • Creating the ServiceMeshControlPlane
  • Adding workloads to a service mesh
  • Enabling sidecar injection
  • Upgrading Service Mesh
  • Managing users and profiles
  • Traffic management
  • Metrics, logs, and traces
  • Performance and scalability
  • Deploying to production
  • 3scale WebAssembly for 2.1
  • 3scale Istio adapter for 2.0
  • Troubleshooting Service Mesh
  • Control plane configuration reference
  • Kiali configuration reference
  • Jaeger configuration reference
  • Uninstalling Service Mesh
  • Service Mesh 1.x release notes
  • Installing Service Mesh
  • Deploying applications on Service Mesh
  • Data visualization and observability
  • Custom resources
  • 3scale Istio adapter for 1.x
  • Removing Service Mesh
  • Distributed tracing release notes
  • Distributed tracing architecture
  • Installing distributed tracing
  • Configuring the distributed tracing platform
  • Configuring distributed tracing data collection
  • Upgrading distributed tracing
  • Removing distributed tracing
  • About OpenShift Virtualization
  • Start here with OpenShift Virtualization
  • OpenShift Virtualization release notes
  • Preparing your cluster for OpenShift Virtualization
  • Specifying nodes for OpenShift Virtualization components
  • Installing OpenShift Virtualization using the web console
  • Installing OpenShift Virtualization using the CLI
  • Installing the virtctl client
  • Uninstalling OpenShift Virtualization using the web console
  • Uninstalling OpenShift Virtualization using the CLI
  • Upgrading OpenShift Virtualization
  • Additional security privileges granted for kubevirt-controller and virt-launcher
  • Using the CLI tools
  • Creating virtual machines
  • Editing virtual machines
  • Editing boot order
  • Deleting virtual machines
  • Managing virtual machine instances
  • Controlling virtual machine states
  • Accessing virtual machine consoles
  • Triggering virtual machine failover by resolving a failed node
  • Installing the QEMU guest agent on virtual machines
  • Viewing the QEMU guest agent information for virtual machines
  • Managing config maps, secrets, and service accounts in virtual machines
  • Installing VirtIO driver on an existing Windows virtual machine
  • Installing VirtIO driver on a new Windows virtual machine
  • Working with resource quotas for virtual machines
  • Specifying nodes for virtual machines
  • Configuring certificate rotation
  • Automating management tasks
  • EFI mode for virtual machines
  • Configuring PXE booting for virtual machines
  • Managing guest memory
  • Using huge pages with virtual machines
  • Enabling dedicated resources for a virtual machine
  • Scheduling virtual machines
  • Configuring PCI passthrough
  • Configuring a watchdog device
  • TLS certificates for data volume imports
  • Importing virtual machine images with data volumes
  • Importing virtual machine images into block storage with data volumes
  • Importing a Red Hat Virtualization virtual machine
  • Importing a VMware virtual machine or template
  • Enabling user permissions to clone data volumes across namespaces
  • Cloning a virtual machine disk into a new data volume
  • Cloning a virtual machine by using a data volume template
  • Cloning a virtual machine disk into a new block storage data volume
  • Configuring the virtual machine for the default pod network
  • Creating a service to expose a virtual machine
  • Attaching a virtual machine to a Linux bridge network
  • Configuring IP addresses for virtual machines
  • Configuring an SR-IOV network device for virtual machines
  • Defining an SR-IOV network
  • Attaching a virtual machine to an SR-IOV network
  • Viewing the IP address of NICs on a virtual machine
  • Using a MAC address pool for virtual machines
  • Features for storage
  • Configuring local storage for virtual machines
  • Creating data volumes
  • Reserving PVC space for file system overhead
  • Configuring CDI to work with namespaces that have a compute resource quota
  • Managing data volume annotations
  • Using preallocation for data volumes
  • Uploading local disk images by using the web console
  • Uploading local disk images by using the virtctl tool
  • Uploading a local disk image to a block storage data volume
  • Managing offline virtual machine snapshots
Resource quotas per project

This guide covers quota scopes, quota enforcement, requests versus limits, sample resource quota definitions, creating object count quotas, setting resource quotas for extended resources, viewing a quota, and configuring explicit resource quotas.

A resource quota , defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that might be consumed by resources in that project.

This guide describes how resource quotas work, how cluster administrators can set and manage resource quotas on a per project basis, and how developers and cluster administrators can view them.

A quota can manage both compute resources and object counts. Compute resources include cpu, memory, and ephemeral-storage, along with their requests.* and limits.* variants. Object counts cover standard namespaced types such as pods, services, secrets, configmaps, replicationcontrollers, and persistentvolumeclaims.

Each quota can have an associated set of scopes . A quota only measures usage for a resource if it matches the intersection of enumerated scopes.

Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error.

A BestEffort scope restricts a quota to limiting the following resources:

pods

A Terminating , NotTerminating , and NotBestEffort scope restricts a quota to tracking the following resources:

pods

memory

requests.memory

limits.memory

cpu

requests.cpu

limits.cpu

ephemeral-storage

requests.ephemeral-storage

limits.ephemeral-storage

After a resource quota for a project is first created, the project restricts the ability to create any new resources that may violate a quota constraint until it has calculated updated usage statistics.

After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource.

When you delete a resource, your quota use is decremented during the next full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value.

If project modifications exceed a quota usage limit, the server denies the action and returns an error message to the user explaining the quota constraint that was violated and the currently observed usage statistics in the system.

When allocating compute resources, each container might specify a request and a limit value each for CPU, memory, and ephemeral storage. Quotas can restrict any of these values.

If the quota has a value specified for requests.cpu or requests.memory , then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory , then it requires that every incoming container specify an explicit limit for those resources.
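As an illustration of such a quota, the following sketch (resource name and values are hypothetical) constrains both requests and limits for CPU and memory, as well as the pod count:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources   # illustrative name
spec:
  hard:
    pods: "4"              # at most 4 pods in the project
    requests.cpu: "1"      # total CPU requests across all pods
    requests.memory: 1Gi   # total memory requests
    limits.cpu: "2"        # total CPU limits
    limits.memory: 2Gi     # total memory limits
```

Because requests.cpu and requests.memory are set, every incoming container in this project must declare explicit requests for both resources.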

Creating a quota

You can create a quota to constrain resource usage in a given project.

Define the quota in a file.
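For example, a hypothetical file named core-object-counts.yaml could cap counts of several core object types:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: core-object-counts   # illustrative name
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"
```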

Use the file to create the quota and apply it to a project:

For example:
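A sketch of the command, with hypothetical file and project names:

```shell
# General form: create the quota from a file, optionally targeting a project
$ oc create -f <file> [-n <project_name>]

# Example, assuming a definition in core-object-counts.yaml
# and a project named demoproject:
$ oc create -f core-object-counts.yaml -n demoproject
```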

You can create an object count quota for all standard namespaced resource types on OpenShift Container Platform, such as BuildConfig and DeploymentConfig objects. An object count quota places a defined quota on all standard namespaced resource types.

When using a resource quota, an object is charged against the quota upon creation. These types of quotas are useful to protect against exhaustion of resources. An object can only be created if there is enough spare quota within the project.

To configure an object count quota for a resource:

Run the following command:
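A sketch of the command; the quota name and the counts chosen here are illustrative:

```shell
# Create an object count quota named "test" that caps several
# standard namespaced resource types in the current project:
$ oc create quota test \
    --hard=count/deployments.apps=2,count/replicasets.apps=4,count/pods=3,count/secrets=4
```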

This example limits the listed resources to the hard limit in each project in the cluster.

Verify that the quota was created:
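Assuming the quota name used in the previous step, the check can be sketched as:

```shell
# Show the quota's hard limits and current usage:
$ oc describe quota test
```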

Overcommitment of resources is not allowed for extended resources, so you must specify requests and limits for the same extended resource in a quota. Currently, only quota items with the prefix requests. are allowed for extended resources. The following is an example scenario of how to set a resource quota for the GPU resource nvidia.com/gpu .

Determine how many GPUs are available on a node in your cluster. For example:
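One way to inspect GPU capacity (the node name is a placeholder, and the exact grep pattern is illustrative):

```shell
# Filter a node's capacity and allocatable resources for the GPU resource:
$ oc describe node <node_name> | grep -i 'nvidia.com/gpu'
```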

In this example, 2 GPUs are available.

Set a quota in the namespace nvidia . In this example, the quota is 1 :
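A quota definition along these lines could be saved to a file such as gpu-quota.yaml (the file and quota names are assumptions carried through the following steps):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: nvidia
spec:
  hard:
    requests.nvidia.com/gpu: 1   # allow at most one requested GPU
```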

Create the quota:

Verify that the namespace has the correct quota set:
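The create and verify steps above can be sketched as follows, assuming the file name from the quota definition:

```shell
# Create the quota in the nvidia namespace:
$ oc create -f gpu-quota.yaml -n nvidia

# Verify the hard limit and current usage:
$ oc describe quota gpu-quota -n nvidia
```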

Define a pod that asks for a single GPU. The following example definition file is called gpu-pod.yaml :
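A minimal sketch of such a definition; the pod name, container image, and command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
  namespace: nvidia
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-container
    image: nvidia/cuda:12.2.0-base-ubi8   # illustrative image
    command: ["sleep", "infinity"]
    resources:
      limits:
        nvidia.com/gpu: 1   # request a single GPU; for extended
                            # resources, the limit implies the request
```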

Create the pod:

Verify that the pod is running:

Verify that the quota Used counter is correct:
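The three steps above can be sketched as, using the names assumed earlier:

```shell
# Create the pod from the definition file:
$ oc create -f gpu-pod.yaml

# Confirm the pod reaches the Running state:
$ oc get pods -n nvidia

# Check that the quota's Used column now shows one GPU consumed:
$ oc describe quota gpu-quota -n nvidia
```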

Attempt to create a second GPU pod in the nvidia namespace. The node technically has the capacity for it, because it has 2 GPUs:

This Forbidden error message is expected because you have a quota of 1 GPU and this pod tried to allocate a second GPU, which exceeds its quota.
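Assuming a second definition file that is a copy of the first with a different pod name, the attempt and the rough shape of the rejection look like this (the exact message wording may vary by version):

```shell
# Try to schedule a second single-GPU pod in the same namespace:
$ oc create -f gpu-pod2.yaml
# The API server rejects it with a Forbidden error similar to:
#   Error from server (Forbidden): ... exceeded quota: gpu-quota,
#   requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1,
#   limited: requests.nvidia.com/gpu=1
```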

You can view usage statistics related to any hard limits defined in a project’s quota by navigating in the web console to the project’s Quota page.

You can also use the CLI to view quota details.

Get the list of quotas defined in the project. For example, for a project called demoproject :

Describe the quota you are interested in, for example the core-object-counts quota:
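The two steps can be sketched as:

```shell
# List the quotas defined in the demoproject project:
$ oc get quota -n demoproject

# Show hard limits and current usage for one quota:
$ oc describe quota core-object-counts -n demoproject
```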

Configure explicit resource quotas in a project request template to apply specific resource quotas in new projects.

Access to the cluster as a user with the cluster-admin role.

Install the OpenShift CLI ( oc ).

Add a resource quota definition to a project request template:

If a project request template does not exist in a cluster:

Create a bootstrap project template and output it to a file called template.yaml :
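```shell
# Generate the default bootstrap project template as YAML:
$ oc adm create-bootstrap-project-template -o yaml > template.yaml
```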

Add a resource quota definition to template.yaml . The following example defines a resource quota named 'storage-consumption'. The definition must be added before the parameters: section in the template:
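The definition is added as an entry in the template's objects: list. A sketch of such an entry follows; the storage class names (gold, silver, bronze) and the limits are illustrative:

```yaml
# Added under the template's objects: list, before the parameters: section.
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: storage-consumption
    namespace: ${PROJECT_NAME}
  spec:
    hard:
      persistentvolumeclaims: "10"            # total PVC count in the project
      requests.storage: "50Gi"                # total requested storage
      gold.storageclass.storage.k8s.io/requests.storage: "10Gi"
      silver.storageclass.storage.k8s.io/requests.storage: "20Gi"
      silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
      bronze.storageclass.storage.k8s.io/requests.storage: "0"   # disallow bronze
      bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
```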

Create a project request template from the modified template.yaml file in the openshift-config namespace:

By default, the template is called project-request .
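```shell
# Create the template in the openshift-config namespace:
$ oc create -f template.yaml -n openshift-config
```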

If a project request template already exists within a cluster:

List templates in the openshift-config namespace:

Edit an existing project request template:
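The two steps can be sketched as follows; the template name is a placeholder:

```shell
# List templates in the openshift-config namespace:
$ oc get templates -n openshift-config

# Edit the existing project request template:
$ oc edit template <template_name> -n openshift-config
```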

Add a resource quota definition, such as the preceding storage-consumption example, into the existing template. The definition must be added before the parameters: section in the template.

If you created a project request template, reference it in the cluster’s project configuration resource:

Access the project configuration resource for editing:

By using the web console:

Navigate to the Administration → Cluster Settings page.

Click Global Configuration to view all configuration resources.

Find the entry for Project and click Edit YAML .

By using the CLI:

Edit the project.config.openshift.io/cluster resource:
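```shell
# Open the cluster-scoped project configuration resource for editing:
$ oc edit project.config.openshift.io/cluster
```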

Update the spec section of the project configuration resource to include the projectRequestTemplate and name parameters. The following example references the default project request template name project-request :
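The relevant portion of the resource then looks roughly like this:

```yaml
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  # ...
spec:
  projectRequestTemplate:
    name: project-request   # name of the template in openshift-config
```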

Verify that the resource quota is applied when projects are created:

Create a project:

List the project’s resource quotas:

Describe the resource quota in detail:
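The verification steps can be sketched as follows; the project name is illustrative, and the quota name assumes the storage-consumption example from the template:

```shell
# Create a project; the template's quota should be applied automatically:
$ oc new-project demo

# List the new project's resource quotas:
$ oc get resourcequotas

# Show hard limits and current usage in detail:
$ oc describe resourcequotas storage-consumption
```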
