MirrorMaker 2.0 uses Kafka Connect to make the connections that transfer data between clusters. Each Kafka cluster is identified by its alias, and the name of the originating cluster is prepended to the name of the replicated topic. Users that can read a source topic can read its remote equivalent. Kafka MirrorMaker 1 will be removed from AMQ Streams when it adopts Apache Kafka 4.0.0.

Perform the following steps to create a mirror. Start ZooKeeper and Kafka in the target cluster, then verify for the target cluster that the topics are being replicated. Open a Terminal window and go to the root of the repository, then start the test environment. Open a new Terminal window and connect to a broker of the source cluster. If you want to remove that exception, you just need to define the connect-log4j.properties file location in an environment variable, using the export command shown in the listing below. After the downtime period, even if topic1 (inside clusterB) has remained misaligned with respect to topic1 (inside clusterA), it can be realigned starting from clusterA.topic1, which is replicated automatically as soon as clusterB comes back online.

If the infrastructure supports the processing overhead, increasing the number of tasks (the tasks.max configuration for a MirrorMaker connector) can improve throughput. The flush pipeline for data replication is: source topic -> (Kafka Connect) -> source message queue -> producer buffer -> target topic. To use topic offset synchronization, enable the synchronization by adding sync.group.offsets.enabled to the checkpoint connector configuration and setting the property to true.
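As a sketch, in a dedicated MirrorMaker 2.0 properties file this setting can be scoped to a replication flow; the clusterA->clusterB aliases are the ones used later in this walkthrough:

clusterA->clusterB.sync.group.offsets.enabled = true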
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/etc/kafka/connect-log4j.properties"

The full environment for this walkthrough is available at https://github.com/mroiter-larus/kafka-docker-mm2; if you run into any issues, please raise a GitHub issue. Among the features MM2 brings:

- Timestamp-based recovery (consumer group checkpoint)
- A single place (a properties file) where to configure the entire cross-cluster replication
- Topic configuration, partitions, and ACLs kept in sync

The commands used throughout the walkthrough are:

connect-mirror-maker /tmp/kafka/config/mm2.properties
kafka-topics --bootstrap-server broker1A:29092 --create --topic topic1 --partitions 3 --replication-factor 3
kafka-topics --bootstrap-server broker1B:29093 --create --topic topic1 --partitions 3 --replication-factor 3
kafka-topics --bootstrap-server broker1A:29092 --list
kafka-console-producer --bootstrap-server broker1A:29092 --topic topic1
kafka-console-producer --bootstrap-server broker1B:29093 --topic topic1
kafka-console-consumer --bootstrap-server broker1B:29093 --topic clusterA.topic1 --from-beginning
docker-compose stop broker1B broker2B broker3B
docker-compose start broker1B broker2B broker3B

Let's create a topic named topic1 in both clusters.

The expectation is that producers and consumers connect to active clusters only. Instead of prepending the name with the name of the source cluster, the topic retains its original name. You can provide any name you require. MirrorMaker 2.0 can be used with more than one source cluster, and it uses its MirrorHeartbeatConnector to emit heartbeats that perform these checks. The previous version of MirrorMaker continues to be supported by running MirrorMaker 2.0 in legacy mode. Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0.
As a replacement, use MirrorMaker 2.0 with the IdentityReplicationPolicy.
We've seen how to set up MirrorMaker 2.0 in a dedicated instance. The MirrorMaker 2.0 architecture supports unidirectional replication in an active/passive cluster configuration. When enabled, ACLs are applied to synchronized topics. If you want to replicate the data immediately, you just need to restart MirrorMaker (re-running step 4) and the game is done. Let's verify that the message sent during the downtime period has been replicated. If you don't see the replicated events immediately, just wait until MirrorMaker resyncs data between the clusters (it may take a few minutes)
and rerun the step-4 command.

A sample configuration properties file is provided in ./config/connect-mirror-maker.properties. Data replication across clusters supports scenarios that require:

- Recovery of data in the event of a system failure
- Restriction of data access to a specific cluster
- Provision of data at a specific location to improve latency

The properties file includes connection information for each cluster (including TLS authentication), source cluster configuration to consume data from the source cluster, and target cluster configuration to output data to the target cluster. A heartbeat verifies that the connector managing connectivity between clusters is running. If flushing becomes a problem, the options include decreasing the default value in bytes of the producer buffer, increasing the default value in milliseconds of the offset flush timeout, and increasing the number of nodes for the workers that run tasks.

OPTION: If you want to synchronize consumer group offsets, add configuration to enable and manage the synchronization; synchronization is disabled by default. Start ZooKeeper and Kafka in the target clusters, then start MirrorMaker with the cluster connection configuration and replication policies you defined in your properties file. MirrorMaker sets up connections between the clusters.
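A minimal sketch of starting the dedicated driver with that file, assuming you run it from the root of a Kafka distribution:

./bin/connect-mirror-maker.sh ./config/connect-mirror-maker.properties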
When using the IdentityReplicationPolicy in the source connector, it also has to be configured in the checkpoint connector configuration. Cluster names are configurable through the clusters property.
If enabled, the synchronization of offsets from the source cluster is performed periodically. Only MirrorMaker 2.0 can write to remote topics. You might need to adjust the values to have the desired effect. Each connector comprises one or more tasks that are distributed across a group of workers that run the tasks. For the source connector, the maximum number of tasks possible is one for each partition being replicated from the source cluster. MirrorMaker 2.0 has features not supported by the previous version of MirrorMaker. You need the properties files you currently use with the legacy version of MirrorMaker. In this situation, you might not want automatic renaming of remote topics.

What could happen in a production scenario is that one of the two clusters goes down for some reason. In this case, we can recover the lost clusterB from clusterA. That's it! Create a folder to store the consumer and producer config files. If the synchronization of consumer group offsets is enabled, you can adjust the frequency of the synchronization, as sketched below.
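A minimal sketch of such an adjustment, scoped to the illustrative clusterA->clusterB flow; both properties, named in the next paragraph, default to 60 seconds:

clusterA->clusterB.sync.group.offsets.interval.seconds = 30
clusterA->clusterB.emit.checkpoints.interval.seconds = 30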
You can change the frequency by adding sync.group.offsets.interval.seconds and emit.checkpoints.interval.seconds to the checkpoint connector configuration. A heartbeat internal topic checks connectivity between clusters. See Oracle Event Hub Cloud Service Dedicated: Access Rules Page. Create the producer configuration file and name it targetClusterProducer.config.
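A minimal sketch of targetClusterProducer.config; the host and port are assumptions for a local target cluster:

bootstrap.servers=localhost:9093
acks=all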
By default, the offset-syncs topic is created on the source cluster; you can use the offset-syncs.topic.location connector configuration to change this to the target cluster.
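A sketch of that setting, again scoped to an illustrative flow:

clusterA->clusterB.offset-syncs.topic.location = target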
For more information, refer to the Apache Kafka documentation. MirrorMaker 2.0 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters. Let's understand this with a simple setup where both clusters exist on the same machine. Disaster recovery strategies, and more generally business continuity plans, are among the most critical things to implement in order to minimize data-center downtime and data loss.

A single task is handled by one worker, so you don't need more workers than tasks. The number of tasks that are started for these connectors is the lower value between the maximum number of possible tasks and the value of tasks.max. Try to avoid a situation where a large producer buffer and an insufficient offset flush timeout period cause a "failed to flush" or "failed to commit offsets" type of error. The default for both interval properties is 60 seconds, and because the synchronization is time-based, any switchover by consumers to a passive cluster will likely result in some duplication of messages.

Provide the details below for creating the access rule (see Administering Oracle Event Hub Cloud Service Dedicated and Oracle Event Hub Cloud Service Dedicated: Access Rules Page). Create an Oracle Event Hub Cloud Service topic in the cluster that you want to mirror. That's it; I hope this article helps you get a basic idea of mirroring, or replicating, data from one Kafka cluster to another. You can override automatic renaming by adding IdentityReplicationPolicy to the source connector configuration.
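A sketch using the policy class bundled with recent Kafka releases; as noted earlier, when used for the source connector it also has to be set for the checkpoint connector:

replication.policy.class = org.apache.kafka.connect.mirror.IdentityReplicationPolicy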
The topic created in the cluster you want to mirror is considered the source; the topic in the other cluster is the destination in which the source topic will be mirrored. One Kafka cluster is the source and the other is the target. Start the ZooKeeper nodes and the Kafka nodes.

Basically, MM2 introduces the features listed at the beginning of this article, and it can replicate data in directional flows expressed with the notation source->target (for example, clusterA->clusterB). A typical use-case of MM2 is a multi-cluster environment where clusters could be in the same data center or across multiple data centers. You can use MirrorMaker 2.0 in active/passive or active/active cluster configurations; doing so, if a cluster goes down, all the data is still available on the other one. The worst scenario could be that a cluster is permanently lost. A MirrorMaker 2.0 cluster is required at each target destination. The recommended pattern is for messages to be produced locally alongside the source Kafka cluster, then consumed remotely close to the target Kafka cluster. The concept of replication through remote topics is useful when configuring an architecture that requires data aggregation. With MirrorMaker, we just need to send the events to topic1, and MM2 will take care of keeping the two clusters synchronized. With the IdentityReplicationPolicy configuration applied, topics retain their original names.

This procedure describes how to implement MirrorMaker 2.0 by creating the configuration in a properties file, then passing the properties when using the MirrorMaker script file to set up the connections. Connector properties, and the connectors you configure to use them, are discussed throughout this section. In legacy mode, save the changes and restart MirrorMaker with the properties files you used with the previous version of MirrorMaker; the consumer properties provide the configuration for the source cluster and the producer properties provide the target cluster configuration.

Before we start, make sure that your Docker environment has at least 8 GB of allocated memory in order to avoid out-of-memory errors (the default allocation is usually lower). The mm2.properties file to configure MirrorMaker is the following.
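The original file is not reproduced here, so what follows is a minimal sketch of an active/active mm2.properties consistent with the commands used in this walkthrough; the broker addresses and replication factors are assumptions based on the three-broker clusters:

clusters = clusterA, clusterB

clusterA.bootstrap.servers = broker1A:29092
clusterB.bootstrap.servers = broker1B:29093

clusterA->clusterB.enabled = true
clusterA->clusterB.topics = .*
clusterB->clusterA.enabled = true
clusterB->clusterA.topics = .*

replication.factor = 3
checkpoints.topic.replication.factor = 3
heartbeats.topic.replication.factor = 3
offset-syncs.topic.replication.factor = 3

sync.group.offsets.enabled = true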
All processes run on the same host. However, increasing the frequency of the operation might affect overall performance. Workers are assigned one or more tasks; the heartbeat connector always uses a single task. SSH into the destination cluster. Create folders for the ZooKeeper and Kafka logs. The replication.policy.separator property defines the separator used for the renaming of remote topics.
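A sketch, assuming you want an underscore instead of the default dot (clusterA.topic1 would then be named clusterA_topic1):

replication.policy.separator = _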
The process of mirroring data from one cluster to another cluster is asynchronous. A MirrorMaker 2.0 MirrorSourceConnector replicates topics from a source cluster to a target cluster. The __consumer_offsets topic stores information on committed offsets for each consumer group. Using the target cluster as the location of the offset-syncs topic allows you to use MirrorMaker 2.0 even if you have only read access to the source cluster.

From inside the container, start MirrorMaker. If you see java.io.FileNotFoundException: /usr/bin/../config/connect-log4j.properties (No such file or directory), don't panic! To turn off MirrorMaker 2.0 features, edit the MirrorMaker consumer.properties and producer.properties files.
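A minimal sketch of the two legacy-mode files; the bootstrap addresses and group id are illustrative:

# consumer.properties -- source cluster
bootstrap.servers=source-cluster:9092
group.id=mirror-maker-group

# producer.properties -- target cluster
bootstrap.servers=target-cluster:9092
acks=all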
Kafka provides MirrorMaker sink and source connectors for data replication. Configuring the IdentityReplicationPolicy for the checkpoint connector as well, as noted earlier, ensures that the mirrored consumer offsets will be applied for the correct topics. An example location for the config files is /u01/oehpcs/confluent/etc/mirror-maker; see Connecting to a Cluster Node Through Secure Shell (SSH). With the provided docker-compose file we are going to simulate two Kafka clusters, named clusterA and clusterB (as specified in the mm2.properties file), and we are going to use MM2 to replicate data across them in an active/active scenario. The MirrorMaker 2.0 architecture supports bidirectional replication in an active/active cluster configuration.
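Bringing the simulated environment up is assumed to be a plain docker-compose startup from the repository root:

docker-compose up -d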
In legacy mode, MirrorMaker 2.0 features are disabled, including the internal topics. For the checkpoint connector, the maximum number of tasks possible is one for each group being replicated from the source cluster.
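A sketch of raising the task ceiling in the properties file; the value is illustrative, and the effective task count is still capped by the partition or group count described above:

tasks.max = 10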
MirrorMaker 2.0 (MM2), based on the Kafka Connect framework, is the new open-source solution able to manage multi-cluster environments and cross-data-center replication. Use MirrorMaker 2.0 to synchronize data between Kafka clusters through configuration. Each cluster replicates the data of the other cluster using the concept of source and remote topics; don't confuse this with the replication of data among Kafka nodes of the same cluster. MirrorMaker 2.0 tracks offsets for consumer groups using internal topics. By synchronizing configuration properties, the need for rebalancing is reduced. Synchronization is not enabled by default. An offset flush timeout period (offset.flush.timeout.ms) is the time to wait for the producer buffer (producer.buffer.memory) to flush and offset data to be committed. Tasks run in parallel; if there are more tasks than workers, workers handle multiple tasks.

You can check out my other article on Kafka, which will help you get a basic idea of Apache Kafka setup and commands. Connect to the Kafka container dedicated to MirrorMaker. Consume from the Kafka nodes on the first cluster. Create another Oracle Event Hub Cloud Service topic in another cluster. Create the consumer configuration file and name it sourceClusterConsumer.config.
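A minimal sketch of sourceClusterConsumer.config; the host, port, and group id are assumptions for a local source cluster:

bootstrap.servers=localhost:9092
group.id=mirror-maker-group
auto.offset.reset=earliest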
For example, if the source topic name is topic1, then the destination topic name should be topic1replica. If you want to make sure about this, just run the simple Java application I've provided in the GitHub repository linked above and check the offsets for every topic1 and clusterA.topic1 partition. Open a new Terminal window, then connect to one of the brokers in the target cluster, and check the topics list in both clusters by running the commands below.
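Assuming the broker addresses used earlier in the command listing:

kafka-topics --bootstrap-server broker1A:29092 --list
kafka-topics --bootstrap-server broker1B:29093 --list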
A "failed to flush" or "failed to commit offsets" type of error means that there are too many messages in the producer buffer, so they can't all be flushed before the offset flush timeout is reached. If you have two Oracle Event Hub Cloud Service Dedicated clusters, you can set up a mirror from a topic that is present in one cluster to another topic that is present in a different cluster.
If you are getting this type of error, try the configuration changes sketched below. They should help to keep the underlying Kafka Connect queue of outstanding messages at a manageable size.
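A sketch of both knobs in MirrorMaker 2.0 properties terms; the values are illustrative, not recommendations, and the per-cluster producer prefix is an assumption about your alias names:

# wait longer for the producer buffer to flush before the offset commit times out
offset.flush.timeout.ms = 10000
# shrink the target-side producer buffer (bytes) so fewer messages are outstanding
clusterB.producer.buffer.memory = 8388608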
Don't worry about the output order of the events: MirrorMaker also replicates topic configurations, including checkpoints and offsets. The source cluster and the target cluster are independent of each other. As a result, Kafka MirrorMaker 1 has been deprecated in AMQ Streams as well.
With KIP-382, MM2 was accepted as the official solution by Apache Kafka.
There are 4 Kafka nodes in total: 2 nodes connect to ZooKeeper on 2181 and the other 2 on 2182. Once the cluster is up and running, MM2 automatically resyncs topics and all of their configurations. You can also change the frequency of checks for new consumer groups using the refresh.groups.interval.seconds property; the check is performed every 10 minutes by default. Consumers can subscribe to source and remote topics within the same cluster, without the need for a separate aggregation cluster. Despite this, MirrorMaker has some weaknesses which do not make it the ideal solution for disaster recovery cases. This means that every message has its corresponding offset, so the event ordering will be guaranteed.

Emulate the previous version of MirrorMaker. Open the sample properties file in a text editor, or create a new one, and edit the file to include connection information and the replication flows for each Kafka cluster. This running mode does not need a running Connect cluster: it leverages a high-level driver which generates a set of Connect workers based on the mm2.properties configuration file. Setting the connect-log4j.properties location is not mandatory, but it is helpful if you would like to explore the MirrorMaker logs. From the broker1B Docker container, to which we connected at step 8, do the same thing. Create the topic mirrormakerPOC on both Kafka clusters with the same number of partitions.
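A sketch of creating that topic on both local clusters; the ports, partition count, and replication factor are assumptions for the two-node clusters described above:

kafka-topics --bootstrap-server localhost:9092 --create --topic mirrormakerPOC --partitions 3 --replication-factor 2
kafka-topics --bootstrap-server localhost:9093 --create --topic mirrormakerPOC --partitions 3 --replication-factor 2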
We can simulate this scenario by stopping the containers of clusterB, for example with the docker-compose stop command shown earlier. Our hypothetical application will still be available for both reading and writing, since we can still produce and consume events on topic1 (inside clusterA). MirrorMaker 2.0 monitors source topics and propagates any configuration changes to remote topics, checking for and creating missing partitions. The MirrorMaker script /opt/kafka/bin/kafka-mirror-maker.sh can run MirrorMaker 2.0 in legacy mode.
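A sketch of a legacy-mode invocation with the files prepared earlier; the whitelist and stream count are illustrative:

/opt/kafka/bin/kafka-mirror-maker.sh \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist "topic1" \
  --num.streams 2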
Connectors create the tasks that are responsible for moving data in and out of Kafka. You need read/write access to the cluster that contains the topic. The interval properties described earlier adjust the frequency of checks for offset tracking.