2 What's new?
production) often have environment-specific settings such as database names, schema names, and Replicate
task names. Variables allow you to easily move projects between different environments without needing to
manually configure the settings for each environment. This is especially useful if many settings are different
between environments. For each project, you can use the predefined environment variables or create your
own environment variables.
Excluding environment variables from export operations
An option has been added to replace environment-specific settings with the defaults when exporting projects
(CLI) or creating deployment packages.
To facilitate this functionality, the --without_environment_specifics parameter was added to the
export_project_repository CLI command, and an Exclude environment variable values option was added
to the Create Deployment Package dialog.
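As a rough sketch, a CLI export that strips environment-specific values might look like the following. Only the --without_environment_specifics parameter is confirmed by these release notes; the executable name and the --project and --outfolder parameters are assumptions for illustration.

```shell
# Hypothetical invocation: export a project, replacing
# environment-specific settings with their defaults.
ComposeCli.exe export_project_repository --project MyProject --outfolder C:\Exports --without_environment_specifics
```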
Support for data profiling and data quality rules when using Google
Cloud BigQuery
You can now configure data profiling and data quality rules when using Google Cloud BigQuery as a data
warehouse.
Attributes case sensitivity support
In previous versions, attempting to create several Attributes with the same name but a different case would
result in a duplication error. Now, such attributes are created with an integer suffix that increases
incrementally for each attribute added with the same name. For example: Sales, SALES_01, and Sales_02.
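The suffixing behavior described above can be sketched as a small Python function. This is a hypothetical illustration of the naming scheme, not Compose's actual implementation; the function name and zero-padded suffix format are assumptions based on the example in the text.

```python
def suffix_duplicate_names(names):
    """Give case-insensitive duplicate names an incrementing _NN suffix."""
    counts = {}  # lowercase name -> number of times already seen
    result = []
    for name in names:
        key = name.lower()
        n = counts.get(key, 0)
        # First occurrence keeps its name; later ones get _01, _02, ...
        result.append(name if n == 0 else f"{name}_{n:02d}")
        counts[key] = n + 1
    return result

print(suffix_duplicate_names(["Sales", "SALES", "Sales"]))
# → ['Sales', 'SALES_01', 'Sales_02']
```

Matching names only on a case-folded key reproduces the example from the text: Sales, SALES_01, Sales_02.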
Associating a Replicate task that writes to a Hadoop target
You can now associate a Replicate task that writes to a Hadoop target with the Compose landing zone.
Performance improvements
This version provides the following performance improvements:
- Validating a model with self-referencing entities is now significantly faster than in previous versions.
  For instance, it now takes less than a minute (instead of up to two hours) to validate a model with 5500
  entities.
- The time it takes to "Adjust" the data warehouse has been significantly reduced. For instance, it now
  takes less than three minutes (instead of up to two hours) to adjust a data warehouse with 5500
  entities.
- Optimized queries, resulting in significantly improved data warehouse loading and CDC performance.
- Significantly improved the loading speed of data mart Type 2 dimensions with more than two entities.
  To benefit from this improvement, customers upgrading with existing data marts need to
  regenerate their data mart ETLs.
- Improved performance of data warehouse loading by reducing the number of statements executed when
  there is no data to process. This change impacts cloud data warehouses such as Snowflake, Amazon Redshift,
Release Notes - Qlik Compose, May 2022 10