This is Carrie Ballinger, and welcome to Teradata TechBytes. I’d like to spend the next few minutes with you talking about the Teradata Integrated Workload Management offering. This is your basic bundling of Workload Management. So let’s get started looking at the basic pieces of the Teradata Integrated Workload Management offering.

The most important component in Teradata Workload Management is the workload itself. A workload represents one of the several different types of work that will be running on the platform. You’ll probably define many workloads on your system – anywhere from 6, to 8, to 10, to 12, or more. When a query enters the Teradata system, it goes through parsing and optimizing in the Optimizer. By the time it’s ready to execute on the AMPs, the Workload Management software knows what the query’s characteristics are, because it can see the query plan. It also knows information from the session logon, such as the user ID, the account, or the profile. This information is used to map that query to one of the existing workloads that has been defined in the system, and that mapping of a query to a workload is done by means of classification criteria.

Classification criteria apply not only to workloads but also to some of the optional components in Teradata Integrated Workload Management, such as filters and throttles. I’m going to talk about all of these components in a little bit, but the main point I want to make about classification criteria is that you have a lot of choices. There are three main groupings of classification criteria in Teradata Workload Management. We’ve got what I call “WHO” criteria, which is the information that’s known at session logon time – like the user, or even the IP address the session is originating from. Then we have “WHERE” criteria, which is the database objects that a particular query is going to be touching or operating on – things like views, databases, even stored procedures.
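To make the mapping idea concrete, here is a minimal Python sketch of classifying a query to a workload using WHO/WHERE/WHAT-style criteria. This is purely illustrative – it is not the Teradata implementation or API, and all names (workload names, criteria keys) are hypothetical:

```python
# Illustrative sketch only (not Teradata internals): map an incoming query to
# the first workload whose classification criteria all match. A real system
# evaluates richer criteria (user, account, profile, objects, plan estimates).

def classify(query, workloads):
    """Return the name of the first workload whose criteria all match the query."""
    for wd in workloads:
        if all(query.get(k) == v for k, v in wd["criteria"].items()):
            return wd["name"]
    return "WD-Default"   # no match: fall back to a default workload

workloads = [
    # WHO criterion: the session's user
    {"name": "WD-Tactical",  "criteria": {"user": "call_center"}},
    # WHAT criterion: an estimated-processing-time band from the query plan
    {"name": "WD-Reporting", "criteria": {"est_time_band": "long"}},
]

query = {"user": "analyst7", "est_time_band": "long"}
print(classify(query, workloads))   # → WD-Reporting
```

The first-match ordering here mirrors the general idea that a query lands in exactly one workload; the actual matching rules and precedence in Teradata are richer than this sketch.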
One of the new types of WHERE criteria is the QueryGrid server. If you’re accessing a foreign server, for example, you can use the name of that foreign server as the classification criterion for a workload. Then any queries that enter the system and will be accessing that particular foreign server can map to a single workload or a single system throttle, for example. The third grouping is the “WHAT” criteria, and this is basically the query characteristics – things like whether it’s an all-AMP request or not, whether there is DDL going on, or whether it’s a COLLECT STATISTICS statement. One of the most popular of these WHAT criteria is estimated processing time, which is used by many Teradata sites today as a way to classify shorter queries to one workload, usually at a higher priority, and longer queries to a different workload.

The system filter rule is an option that is not widely used, because it can be harsh at times. It’s a pass/fail type of rule: any query entering the system that matches the classification of a filter rule will be rejected and will not be allowed to start running. In the example on this slide, we’ve got a filter rule that says no table scans on the Call Detail Record table between the hours of 8 in the morning and 5 at night. So if this rule were active, any query that comes into the system and attempts to do a full-table scan on the CDR table would be rejected. This could be useful if you want to make sure that only indexed access or partition-elimination type activity happens against a really large table. The rejection is reported in DBQL as well as in the workload management logs, so the DBA knows how many of these rejections have taken place, and the end user gets a message back saying which system filter rule he or she violated.
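The pass/fail character of a filter rule can be sketched as a simple check. Again, this is a hypothetical illustration, not Teradata syntax – the function name, fields, and the warning-mode flag are all invented for the example:

```python
# Illustrative sketch of the slide's filter rule (hypothetical names, not
# Teradata syntax): reject full-table scans on the CDR table from 8:00 to 17:00.

def check_filter(query_plan, hour, warning_mode=False):
    """Return (rejected, violation_logged) for this one filter rule."""
    violates = (query_plan["table"] == "CDR"
                and query_plan["access"] == "full_table_scan"
                and 8 <= hour < 17)
    # In warning mode the violation is logged (e.g. to DBQL) but never rejected.
    rejected = violates and not warning_mode
    return rejected, violates

scan = {"table": "CDR", "access": "full_table_scan"}
print(check_filter(scan, 10))                      # → (True, True): rejected and logged
print(check_filter(scan, 10, warning_mode=True))   # → (False, True): logged only
print(check_filter({"table": "CDR", "access": "indexed"}, 10))   # → (False, False)
```

The warning-mode branch corresponds to the behavior described next: the query is checked against the rule but allowed to run, with only the violation reported.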
You can also put system filters in warning mode, which means the queries are checked against the criteria in the rule but never rejected – they are simply reported as being in violation of the rule. That’s actually a good way of finding out who is issuing these queries, and maybe talking to them, before you actually start rejecting the queries.

The second option I want to mention is the system throttle. Throttles are one of the more popular options in Teradata Workload Management, and they are concurrency control options. A system throttle comes with classification criteria that the administrator decides, and any query entering the system that matches that classification criteria will be controlled by that system throttle. When you define a throttle, you specify a limit, which is the concurrency limit. When a query entering the system matches the classification of that throttle rule, the counter being kept in the database is compared to the limit defined in the rule, and if the counter of queries under the control of that throttle rule is equal to the limit, then the query is, under normal circumstances, placed in the delay queue, which is first in, first out. Queries could instead be rejected – you have the option with a throttle to either delay or reject – but to be honest with you, I’ve never seen anyone use the reject option on a throttle rule. So for most cases you can assume that when the counter is at the limit, any new queries will go into that delay queue.

There are several different types of throttles available in Teradata Integrated Workload Management. We just talked about the system throttle. There are also utility throttles, which function in a similar way but are exclusively for the load and FastExport types of utilities. We also have workload throttles. They do not come with classification criteria, because they are attached to a workload itself, and the workload has the classification criteria.
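The counter-versus-limit mechanics of a throttle, with a first-in-first-out delay queue, can be sketched in a few lines. This is a toy model for illustration – the class and method names are invented, and it says nothing about how Teradata actually implements its counters:

```python
from collections import deque

# Illustrative sketch (hypothetical, not Teradata internals) of a throttle:
# a running-query counter compared against a concurrency limit, with new
# queries placed in a first-in-first-out delay queue once the limit is reached.

class Throttle:
    def __init__(self, limit):
        self.limit = limit          # concurrency limit defined in the rule
        self.running = 0            # counter of queries this throttle controls
        self.delay_queue = deque()  # FIFO delay queue

    def submit(self, query):
        """Admit the query if the counter is below the limit; else delay it."""
        if self.running < self.limit:
            self.running += 1
            return "running"
        self.delay_queue.append(query)  # delay rather than reject (the common choice)
        return "delayed"

    def complete(self):
        """A running query finished; release the oldest delayed query, if any."""
        self.running -= 1
        if self.delay_queue:
            self.running += 1
            return self.delay_queue.popleft()
        return None

t = Throttle(limit=2)
print(t.submit("q1"), t.submit("q2"), t.submit("q3"))  # → running running delayed
print(t.complete())                                    # → q3
```

A reject-style throttle would simply return an error in `submit` instead of appending to the queue, but as noted above, delay is what is used in practice.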
So if you add a throttle to a workload, then any query classifying to that workload will be managed by that throttle. One of the new options in terms of throttles is the group throttle, which is a higher-level throttle above multiple workload throttles and provides a limit that takes several different workload throttles into account. That limit may be less than the total of the combined workload throttle limits, as I am showing you here in this diagram.

One of the new features is the ability to prioritize the delay queue. What prioritizing the delay queue does is sort the entries – all the queries that are waiting to run that have been placed in the delay queue. It orders them by their Priority Scheduler priority, so that higher-priority requests are always guaranteed to be released from the delay queue ahead of any lower-priority requests. As this example shows you, the default is the traditional first-in, first-out ordering, where the query that arrives at 9 o’clock may be a low-priority query, but because it’s first to arrive, it’s going to be the first to be released when it’s able to run. With the option of prioritizing the delay queue, the highest-priority query, no matter what time it arrived, is going to be the first query to be released – provided its particular throttle’s counter is below that throttle’s limit. So a query has to be a candidate for release, in addition to being the highest priority among those that are releasable at that point in time.
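The two conditions at the end – being a candidate (your throttle has room) and being the highest priority among the candidates – can be sketched as follows. As before, this is an invented illustration, not Teradata’s actual release logic:

```python
# Illustrative sketch (hypothetical, not Teradata internals) of a prioritized
# delay queue: among the delayed queries whose own throttle still has room
# (counter < limit), release the one with the highest Priority Scheduler
# priority, regardless of arrival time.

def next_release(delayed, throttles):
    """delayed: list of (priority, arrival, name, throttle_id) tuples, where a
    lower priority number means a higher priority. Only queries under a
    non-full throttle are candidates for release."""
    candidates = [q for q in delayed
                  if throttles[q[3]]["running"] < throttles[q[3]]["limit"]]
    # Tuples sort by priority first, then arrival time as the tie-breaker.
    return min(candidates)[2] if candidates else None

throttles = {"T1": {"running": 2, "limit": 2},   # full: its queries must wait
             "T2": {"running": 1, "limit": 3}}   # has room
delayed = [(1, "09:00", "high-pri under full throttle", "T1"),
           (2, "09:30", "medium-pri releasable", "T2"),
           (3, "09:10", "low-pri releasable", "T2")]

print(next_release(delayed, throttles))   # → medium-pri releasable
```

Note how the highest-priority entry overall is skipped because its throttle is at its limit – it is not a candidate – while among the releasable entries, priority (not arrival time) decides who goes first. Under the default FIFO behavior, the 09:10 arrival would have been released ahead of the 09:30 one.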