Priority buckets and labeling

This section describes infrastructure enhancements used to optimize core services in Neeve Secure Edge (formerly View Secure Edge) and control the sequence in which services are started.

Priority buckets for services

By default, all services are started with equal priority. However, there are scenarios in which an application service or a core service may start before another core service on which it depends. This leads to multiple service restarts and delays in bringing services online after a node reboot.

To avoid this delay, the Secure Edge infrastructure assigns priority buckets to core services to control the start-up sequence.

Service dependency and priority recommendation

For the Secure Edge core services (DHCP, PowerDNS, and Postgres), the service dependencies are as follows:

  • PowerDNS requires Postgres
  • DHCP requires both Postgres and PowerDNS (when Dynamic DNS update is used)

The recommendation is to assign priorities (0 is the highest, 7 the lowest) as follows:

Service                                 Priority
Postgres (Core)                         0
PowerDNS (Core)                         1
DHCP (Core)                             2
Default priority for other services     7

Assign a priority to a service using the following label definition in the respective pod specification:

"io_iotium_pod_priority": "2"
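
For example, a user application service assigned to priority bucket 3 carries the label in the labels section of its pod specification. The following is a minimal sketch (the service name and template label are illustrative; buckets 0-2 are reserved for core services):

{
	"name": "my-app",
	"labels": {
		"io_iotium_template": "my-app-template",
		"io_iotium_pod_priority": "3"
	}
}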

Special labels for service specifications

This section describes the special labels for services in Secure Edge.

Labels for service deployment in clusters

Name: _iotium_master_elect
Type: String
Required: False
Description: The value subscribed sets the environment variable IOTIUM_NODE_ROLE to master or slave based on the node's role in a cluster. Application services use this information to differentiate between the service instance running on the cluster master and the one running on a slave.

Name: _iotium_master_elect_ip / _iotium_master_elect_ip_prefixlen
Type: String
Required: False
Description: Provide the static IP address and prefix length of the application service instance running on the master (the node with IOTIUM_NODE_ROLE equal to master). For these labels to take effect, _iotium_master_elect (above) must be set to subscribed.

These labels ensure that application services running on the master node are properly differentiated from replica instances.

  • The "_iotium_master_elect:subscribed" label ensures that the application services get this information.
  • The "iotium_master_elect_ip:<IPaddress>" and "_iotium_master_elect_ip_prefixlen" labels contain information about the static IP address to use for replica instance of service. This static IP address is applied to the replica instance that is running in the MASTER node within a cluster.

Label for core services (to avoid container time zone conflicts)

Name: _iotium_core_service
Values: true / false
Required: True for Secure Edge core services; False for other services
Description: Marks a service as a "core service" to ensure it adheres to the node time zone.
Valid deployments: Standalone nodes and clusters

Users can change the application service container time zone as needed. While all application services adhere to that time zone, the node remains in its configured time zone (UTC by default). Core services should operate in the same time zone as the node.

To avoid any impact from container time zone setting changes, core services must be explicitly labeled as core services with the following key-value pair:

"_iotium_core_service": "true"

Label for pod priority

Name: io_iotium_pod_priority
Values: 0-2 are reserved for Secure Edge core services; 3-7 are for user application service priority scheduling
Required: False
Usage: Sets the priority for service start-up, ensuring dependent services are started first. Refer to Priority buckets for services for more details.
Valid deployments: Standalone nodes and clusters

Labels to avoid service restart during master failover

Name: _iotium_master_elect_env_volume
Values: String; a volume name given in the volumes section of the pod specification
Required: False
Description: Specifies the volume into which IOTIUM_NODE_ROLE and the cluster-related environment variables are loaded. Instead of setting the environment variables in the service specification, they are written to a file named "runtime.env" in the specified volume. This avoids a restart of the replica service instance on master failover.
Valid deployments: Clusters

Name: _iotium_master_elect_set_env
Values: disable
Required: False
Description: Disables setting the node role-related environment variables directly in the service specification, so that they are read from the "runtime.env" file instead. Used together with _iotium_master_elect_env_volume, as in the Postgres and PowerDNS specifications below.
Valid deployments: Clusters

Instead of requiring all services to restart, this provides a mechanism by which services seamlessly change their roles without undergoing a restart. Running the replica application service with this restart-avoidance configuration effectively reduces downtime on cluster master failover.

Set the volume where the node role-related environment variables are written, and disable updating them as service environment variables.

The above two labels accomplish this. In addition, ensure that the DNS policy for the service is set to None and that the DNS IP address is set. Refer to the PowerDNS / Postgres service specifications for details.

The application service must be able to read from the environment variable file if this feature is used.
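
The following condensed fragment (a minimal sketch distilled from the Postgres specification later in this section) shows how the pieces fit together: the master-elect labels, the volume that holds "runtime.env", the mount that exposes it to the service, and the DNS settings:

{
	"labels": {
		"_iotium_master_elect": "subscribed",
		"_iotium_master_elect_env_volume": "iotium-vol",
		"_iotium_master_elect_set_env": "disable"
	},
	"services": [{
		"docker": {
			"volume_mounts": [{
				"name": "iotium-vol",
				"mount_path": "/config/",
				"read_only": true
			}]
		}
	}],
	"volumes": [{
		"name": "iotium-vol",
		"emptyDir": {}
	}],
	"dns_policy": "None",
	"dns": ["10.102.0.3"]
}

With this configuration, the service reads its current role from the "runtime.env" file under /config/ instead of from environment variables that would require a restart to change.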

Special labels for core services

This section provides the POST bodies for the PowerDNS, PostgreSQL, DHCP, and NTP services that exercise the controls described above. It also includes an option for enabling remote logging for the services.
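
Remote logging follows a common pattern in the examples below: the service container sets the ENABLE_REMOTE_LOGGING environment variable and writes its logs to a shared volume, and a fluent/fluent-bit sidecar container mounted on the same volume ships them. A condensed sketch of that pattern (the secret ID placeholder stands in for a real fluent-bit configuration secret):

{
	"services": [{
			"docker": {
				"environment_vars": {
					"ENABLE_REMOTE_LOGGING": "true"
				},
				"volume_mounts": [{
					"name": "logs",
					"mount_path": "/var/log"
				}]
			}
		},
		{
			"image": {
				"name": "fluent/fluent-bit",
				"version": "1.5"
			},
			"docker": {
				"volume_mounts": [{
						"name": "logs",
						"mount_path": "/var/log"
					},
					{
						"name": "fluentbit",
						"mount_path": "/fluent-bit/etc/"
					}
				]
			}
		}
	],
	"volumes": [{
			"name": "logs",
			"emptyDir": {}
		},
		{
			"name": "fluentbit",
			"secret_volume": {
				"secret": "<fluent-bit-config-secret-id>"
			}
		}
	]
}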

PostgreSQL priority and core service setting

For PostgreSQL, the priority label is set to 0 and the core service label is set to true. The Postgres service image version iotium/postgres:12.3.0-3-amd64 is required.

To bring up the Postgres core service:

  1. Update the existing Postgres service spec as necessary, including changes to IPs, Secret IDs, Network ID, Cluster ID, and DNS.
  2. Make sure to add the following labels in the "labels" section of the pod spec:
"io_iotium_pod_priority": "0"
"_iotium_core_service": "true"

Postgres service specification example:

{
	"name": "DB",
	"labels": {
		"io_iotium_template": "postgresqacluster",
		"_iotium_master_elect": "subscribed",
		"_iotium_master_elect_ip": "10.102.0.2",
		"_iotium_master_elect_ip_prefixlen": "24",
		"_iotium_master_elect_env_volume": "iotium-vol",
		"_iotium_master_elect_set_env": "disable",
                       "io_iotium_pod_priority": "0",
                       "_iotium_core_service": "true"
	},
	"networks": [{
		"network_id": "n-82e48b339e69df75"
	}],
	"services": [{
		"image": {
			"name": "iotium/postgres",
			"version": "12.3.0-3-amd64"
		},
		"docker": {
			"environment_vars": {
				"POSTGRESQL_PASSWORD": "postgres",
				"DHCP_DB_NAME": "dhcp",
				"DHCP_DB_USER": "dhcp",
				"DHCP_DB_PASSWORD": "dhcp",
				"DNS_DB_NAME": "pdns",
				"DNS_DB_USER": "pdns",
				"DNS_DB_PASSWORD": "pdns",
				"POSTGRESQL_MASTER_HOST": "10.102.0.2"
			},
			"volume_mounts": [{
					"name": "datadir",
					"mount_path": "/bitnami/postgresql"
				},
				{
					"mount_path": "/config/",
					"name": "iotium-vol",
                                                           "read_only": true
				}
			]
		},
		"liveness_probe": {
			"exec": {
				"command": ["/healthcheck.sh"]
			},
			"initial_delay_seconds": 10,
			"timeout_seconds": 5,
			"period_seconds": 30,
			"success_threshold": 1,
			"failure_threshold": 3
		},
		"image_pull_policy": "IfNotPresent"
	}],
	"volumes": [{
		"name": "datadir",
		"emptyDir": {}
	}, {
		"name": "iotium-vol",
		"emptyDir": {}
	}],
	"dns_policy": "None",
	"dns": [
		"8.8.8.8",
		"10.102.0.3"
	],
	"termination_grace_period_in_seconds": 60,
	"kind": "REPLICA",
	"cluster_id": "67664978-55c3-4e56-b04e-a1dd59a5496e",
	"node_selector": {
		"_iotium.cluster.candidate": true
	}
}

PowerDNS service priority and core service setting

For the PowerDNS service, the priority label is set to 1 and the core service label is set to true.

The PowerDNS image version (iotium/powerdns:4.0.8-3-amd64) can be used instead of the readiness probe scripts.

To bring up the PowerDNS core service:

  1. Update the pod spec as necessary, including changes to IPs, Secret IDs, Network ID, Cluster ID, and DNS.
  2. Make sure to add the following labels in the "labels" section of the pod spec:
"io_iotium_pod_priority": "1"
"_iotium_core_service": "true"

PowerDNS service specification for API example

The following example specification runs PowerDNS in Replica mode in a cluster with no service restart on cluster master failover.

{
	"name": "DNS",
	"labels": {
		"io_iotium_template": "pdns-2208",
		"_iotium_core_service": "true",
		"io_iotium_pod_priority": "1",
		"_iotium_master_elect": "subscribed",
		"_iotium_master_elect_ip_prefixlen": "24",
		"_iotium_master_elect_ip": "10.200.100.4",
		"_iotium_master_elect_set_env": "disable",
		"_iotium_master_elect_env_volume": "iotium-vol",
		"_iotium_template": "pdns-2208"
	},
	"networks": [{
		"network_id": "n-6a2c225bfb36ec6f"
	}],
	"services": [{
			"name": "pdnsrecursor",
			"image": {
				"name": "iotium/dnsrecursor",
				"version": "4.5.8-1-amd64"
			},
			"docker": {
				"environment_vars": {
					"PDNS_API_KEY": "changeme",
					"PDNS_WEBSERVER_ALLOW_FROM": "0.0.0.0/0",
					"PDNS_ALLOW_RECURSION": "",
					"PDNS_RECURSOR": ""
				},
				"volume_mounts": [{
					"name": "zonefile",
					"mount_path": "/var/pdns/zonefiles",
					"read_only": false
				}]
			},
			"image_pull_policy": "IfNotPresent"
		},
		{
			"image": {
				"name": "iotium/powerdns",
				"version": "4.5.4-1amd64"
			},
			"docker": {
				"environment_vars": {
					"PDNS_GPGSQL_PASSWORD": "pdns",
					"PDNS_ALLOW_DNSUPDATE_FROM": "10.200.100.5",
					"PDNS_GPGSQL_DBNAME": "pdns",
					"PDNS_GPGSQL_USER": "pdns",
					"PDNS_API_KEY": "changeme",
					"PDNS_WEBSERVER_ALLOW_FROM": "0.0.0.0/0",
					"PDNS_GPGSQL_HOST": "10.200.100.3",
					"ENABLE_REMOTE_LOGGING": "true"
				},
				"volume_mounts": [{
						"name": "zonefile",
						"mount_path": "/var/pdns/zonefiles"
					},
					{
						"name": "iotium-vol",
						"mount_path": "/iotium",
						"read_only": "true"
					},
					{
						"name": "named",
						"mount_path": "/var/pdns/config"
					},
					{
						"name": "logs",
						"mount_path": "/var/log"
					}
				]
			},
			"image_pull_policy": "IfNotPresent"
		},
		{
			"image": {
				"name": "fluent/fluent-bit",
				"version": "1.5"
			},
			"docker": {
				"volume_mounts": [{
						"name": "logs",
						"mount_path": "/var/log"
					},
					{
						"name": "fluentbit",
						"mount_path": "/fluent-bit/etc/"
					}
				]
			},
			"image_pull_policy": "IfNotPresent"
		}
	],
	"volumes": [{
			"name": "zonefile",
			"secret_volume": {
				"secret": "217a8876-ef07-4785-bf38-600dc0f23026"
			}
		},
		{
			"name": "iotium-vol",
			"emptyDir": {}
		},
		{
			"name": "named",
			"secret_volume": {
				"secret": "ff13a50f-db22-4f5d-b36a-d24abf5bd5c7"
			}
		},
		{
			"name": "logs",
			"emptydir": {}
		},
		{
			"name": "fluentbit",
			"secret_volume": {
				"secret": "64c883e5-76f1-455d-b995-f61b9ac839e9"
			}
		}
	],
	"dns_policy": "None",
	"dns": [
		"10.200.100.4"
	],
	"kind": "REPLICA",
	"cluster_id": "1ed3a792-5a18-4b95-be25-30873654535b",
	"node_selector": {
		"_iotium.cluster.candidate": "true"
	}
}

DHCP service priority and core service setting

For the DHCP service instance, the priority label is set to 2 and the core service label is set to true.

To bring up the DHCP core service:

  1. Update the existing pod spec as necessary, including changes to IPs, Secret IDs, Network ID, Cluster ID, and DNS.
  2. Make sure to add the following labels in the "labels" section of the pod spec:
"io_iotium_pod_priority": "2"
"_iotium_core_service": "true"

DHCP service specification for API example:

{
	"kind": "SINGLETON",
	"name": "DHCP",
	"cluster_id": "96cfabed-9410-4b97-be56-71dd9ffc2e7f",
	"networks": [{
		"network_id": "n-4900829c7c563ffd",
		"ip_address": "172.31.0.5"
	}],
	"labels": {
		"io_iotium_pod_priority": "2",
		"_iotium_core_service": "true",
		"io_iotium_template": "dhcpqacluster",
		"io_iotium_fileName": ""
	},

	"services": [{
			"docker": {
				"volume_mounts": [{
						"mount_path": "/etc/kea/",
						"name": "dhcp3"
					},
					{
						"mount_path": "/var/lib/kea",
						"name": "leasedir"
					},
					{
						"mount_path": "/etc/keaddns/",
						"name": "ddns3"
					},
					{
						"mount_path": "/var/log",
						"name": "logs"
					}
				]
			},
			"image_pull_policy": "IfNotPresent",
			"image": {
				"version": "1.6.2-2-amd64",
				"name": "iotium/dhcpd"
			}
		},
		{
			"docker": {
				"volume_mounts": [{
						"mount_path": "/var/log",
						"name": "logs"
					},
					{
						"mount_path": "/fluent-bit/etc/",
						"name": "fluent-bit.conf"
					}
				]
			},
			"image_pull_policy": "IfNotPresent",
			"image": {
				"version": "1.5",
				"name": "fluent/fluent-bit"
			}
		}
	],

	"volumes": [{
			"secret_volume": {
				"secret": "fdbfe29a-e1b3-4c17-9a12-f2c3697ac553"
			},
			"name": "dhcp3"
		},
		{
			"emptyDir": {},
			"name": "leasedir"
		},
		{
			"secret_volume": {
				"secret": "757d6de6-3f74-40db-b7f1-0626c0b5f789"
			},
			"name": "ddns3"
		},
		{
			"emptydir": {},
			"name": "logs"
		},
		{
			"secret_volume": {
				"secret": "ab2179c7-134f-48a9-9414-afd15727e7c0"
			},
			"name": "fluent-bit.conf"
		}
	]
}

NTP service priority and core service setting

You don’t need to set a priority for the NTP service. Set the core service label to true to ensure the service is not affected by the container time zone configuration.

To bring up the NTP core service:

  1. Update the existing pod spec as necessary, including changes to IPs, Secret IDs, Network ID, Cluster ID, and DNS.
  2. Make sure to add the following label in the "labels" section of the pod spec:
"_iotium_core_service": "true"

NTP service specification request body for API example:

{
	"kind": "SINGLETON",
	"name": "NTP",
	"cluster_id": "96cfabed-9410-4b97-be56-71dd9ffc2e7f",
	"networks": [{
		"network_id": "n-4900829c7c563ffd",
		"ip_address": "172.31.0.6"
	}],
	"labels": {
		"_iotium_template": "ntpqacluster",
		"io_iotium_template": "ntpqacluster",
		"_iotium_core_service": "true"
	},
	"services": [{
			"docker": {
				"volume_mounts": [{
					"mount_path": "/var/log",
					"name": "logs"
				}],
				"cap_add": [
					"SYS_TIME",
					"SYS_RESOURCE"
				],
				"environment_vars": {
					"ENABLE_REMOTE_LOGGING": "true"
				}
			},
			"image_pull_policy": "IfNotPresent",
			"image": {
				"version": "4.2.8p10-2-amd64",
				"name": "iotium/ntp"
			}
		},
		{
			"docker": {
				"volume_mounts": [{
						"mount_path": "/var/log",
						"name": "logs"
					},
					{
						"mount_path": "/fluent-bit/etc/",
						"name": "fluent-bit.conf"
					}
				]
			},
			"image_pull_policy": "IfNotPresent",
			"image": {
				"version": "1.5",
				"name": "fluent/fluent-bit"
			}
		}
	],
	"volumes": [{
			"emptydir": {},
			"name": "logs"
		},
		{
			"secret_volume": {
				"secret": "ab2179c7-134f-48a9-9414-afd15727e7c0"
			},
			"name": "fluent-bit.conf"
		}
	]
}