Remote Device Autodiscovery

Outlyer’s agent allows users to autodiscover remote device instances using various mechanisms. This enables agentless monitoring of remote devices on which you cannot install an agent. The discovered devices will appear in the Host Map view as distinct devices that you can select to run checks against remotely.

Users can use this feature to monitor things such as:

  • Hardware devices in a data center such as routers where an agent cannot be installed on the device
  • Cloud services such as Amazon RDS or ELB where you cannot install an agent on the devices

This feature is very flexible. It relies on a discovery script and a discovery check deployed on the agent. The discovery script runs at regular intervals (as defined by the discovery check) to discover the device instances and all their labels/annotations. The plugin script can do this via an API, a database of devices, network scanning, or whatever method makes sense for autodiscovering the devices.

Devices are dynamically added and removed as the plugin updates the list of discovered devices, and each device will appear as a distinct instance in the Outlyer UI that can be selected to run checks using its labels, just like any other instance in Outlyer that is already running an agent.

Writing a Discovery Check

In order to autodiscover remote devices, you need to deploy a check YAML file that defines the check settings, and a plugin that will be run by the check to return a list of instances.

The check YAML file must be deployed under the agent’s conf.d folder to enable the check, and the plugin must be deployed under the agent’s plugins folder. On Linux these folders will be found under /etc/outlyer by default.
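As an illustration, a minimal deployment on Linux could look like the layout below (the file names are examples; only the folder locations come from the agent’s defaults):

```
/etc/outlyer/
├── conf.d/
│   └── remote-instances.yaml    # the discovery check settings
└── plugins/
    └── remote-instances.py      # the discovery plugin the check runs
```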

For this example, a simple remote_instances check will be used to create a list of hardcoded devices from a plugin:


  remote-instances:                                     # The check name (id)
    # The command to run the discovery script
    command: 'python3 ./'
    # The number of seconds between script runs, in this case 5 mins
    interval: 300
    # Flag to disable the check if needed
    disabled: false
    # The timeout in seconds before the discovery script is terminated
    timeout: 120

    # Optional labels to add to all autodiscovered devices
    # See labels section of agent.yaml for more info
    labels:
      environment: 'prod'

    # Optional metric labels to apply to all autodiscovered devices.
    # See metric_labels section of agent.yaml for more info
    metric_labels:
      - 'environment'

    # This is the primary check to determine the status of the remote devices
    check_command: 'python3 ./'
    # The number of seconds between primary check runs to check the status of the devices
    check_interval: 30

    # Optional environment variables to be passed to the discovery script and check_command
The check_command generates the host.status for each instance to determine whether the device is OK (Green), WARNING (Yellow) or CRITICAL (Red) in the Outlyer UI, as determined by the exit code of the check command. If you don’t want to evaluate the host status for every instance and would rather have them all OK (Green) in the UI by default, simply comment out the check_command field in the YAML file.
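As an illustration of what a check_command script could look like, the sketch below attempts a TCP connection to the device and converts the result into an exit code. The 0/1/2 mapping follows the common Nagios-style convention, and the port and environment variable names are assumptions, so adjust them to match how your agent passes instance details:

```python
import socket


def check_device(ip, port=22, timeout=5):
    """Return an exit code reflecting device status, assuming the
    common Nagios-style convention: 0 = OK (Green), 1 = WARNING
    (Yellow), 2 = CRITICAL (Red)."""
    try:
        # A successful TCP connect counts as healthy
        with socket.create_connection((ip, port), timeout=timeout):
            return 0
    except socket.timeout:
        return 1  # reachable but slow to respond
    except OSError:
        return 2  # unreachable or connection refused

# In the real check script you would finish with something like:
#   import os, sys
#   sys.exit(check_device(os.environ['ip']))  # 'ip' name is hypothetical
```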


The plugin should return JSON output of instances, as shown below with a single example instance:

    "instances": [
            "hostname": "",
            "ip": "",
            "labels": {
                "label1": "value1",
                "label2": "value2"
            "annotations": {
                "annotation1": "value1",
                "annotation2": "value2"
    "version": "0.1.0"

An instance has the following fields:

  • hostname (Required): The hostname of the instance. This must be unique for each instance in your account.
  • ip (Required): The IP address of the instance relative to the agent. This is used by the agent plugin checks to make remote requests against the device instance.
  • labels (Required): A hashmap of key/value labels for the instance. These labels can be used as check selectors and to filter/group in your account host map. Please refer to our labels documentation if you want to align your instance labels with Outlyer’s default labels.
  • annotations (Optional): A hashmap of key/value annotations for the instance. You can use this to collect additional metadata about your device instance that isn’t appropriate for labels, such as a list of open ports on that device.
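Since the agent has to parse this output, it can be worth validating each instance before printing it. Below is a minimal sketch of such a validator; the validate_instance helper is illustrative, not part of the agent:

```python
def validate_instance(instance):
    """Return True if a discovered instance has the required shape:
    'hostname' (non-empty string), 'ip' (string) and 'labels' (dict)
    are required; 'annotations', if present, must be a dict."""
    if not isinstance(instance.get('hostname'), str) or not instance['hostname']:
        return False
    if not isinstance(instance.get('ip'), str):
        return False
    if not isinstance(instance.get('labels'), dict):
        return False
    if 'annotations' in instance and not isinstance(instance['annotations'], dict):
        return False
    return True
```

For example, an instance missing the labels hashmap, or with an empty hostname, would fail validation and can be skipped or logged before the final JSON is emitted.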

You can find several examples of Discovery plugins for AWS, such as the script for AWS EC2 instances. Below is a simple example in Python you can use as a starting point to iterate on; it returns a number of hardcoded instances up to the value given by the DEVICE_NUMBER environment variable in the discovery check configuration:

import json
import os
import sys

class MyDiscovery(object):

    def discover(self):

        instances = []

        # DEVICE_NUMBER is set in the discovery check configuration;
        # environment variables are strings, so convert to int
        # (defaults to 0 here if unset)
        device_number = int(os.environ.get('DEVICE_NUMBER', 0))

        i = 0
        while i < device_number:

            host = {
                'hostname': f"host-{i}",
                'ip': '',
                'labels': {
                    "environment": "test",
                    "instance.type": "device",
                    "cloud.instance.region": f"us-west-{i}"
                },
                'annotations': {}
            }
            instances.append(host)

            i = i + 1

        # Output instances in JSON to stdout
        print(json.dumps({"instances": instances, "version": "0.1.0"}))
        return 0

if __name__ == '__main__':
    sys.exit(MyDiscovery().discover())

Unlike monitoring checks that can be written and deployed via the UI, discovery checks must be deployed to disk alongside the agent. When the agent starts up it will automatically find any discovery checks in the conf.d folder, and then run the associated plugin under the plugins folder to get the list of instances.

Once deployed you should see all the discovered instances in the Host View with their labels and annotations, and be able to run monitoring checks against them remotely using label selectors just like instances with agents installed. Below is an example of the host view with our EC2 Discovery script running against AWS’s APIs, grouped by Availability Zone and Instance Type:

Host Map

Please note that there is a limit to the number of remote instances and associated monitoring checks you can run against them on a single agent, determined by the size of the server the agent is installed on and which monitoring checks are being run against the instances on that agent. If you start to see load spikes on the server running the agent, you may want to split your device discovery over several agents.