What I would like to achieve:

  • 1: Spin up a local insecure Elasticsearch with version >8.6 (which enforces security by default, but this is for testing only, I understand the risks)

  • 2: Set up an insecure instance of APM Server talking to the insecure Elastic instance stated above (again, I understand the risks)

What I tried:

  • 1: configure Elastic with the following properties:
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features 
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: false

xpack.security.enrollment.enabled: false

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: false
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["elastic"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

Elasticsearch starts OK; a curl without a certificate and without a username/password yields the correct response.

  • 2: started APM Server

Unfortunately, APM Server is not working: it reports "publish_ready": false,

{
  "build_date": "2023-02-13T13:01:54Z",
  "build_sha": "8638b035d700e5e85e376252402b5375e4d4190b",
  "publish_ready": false,
  "version": "8.6.2"
}
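
For reference, APM Server was started with an essentially default configuration; a minimal apm-server.yml sketch for this insecure setup (the host and port values below are illustrative defaults, not taken from the actual config) would look something like:

```yaml
# Hypothetical minimal apm-server.yml for a local, insecure test setup
apm-server:
  host: "127.0.0.1:8200"            # default listen address

output.elasticsearch:
  hosts: ["http://localhost:9200"]  # plain HTTP, no credentials
```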

Here is the log output:

elastic@elastic:~/apm-server-8.6.2-linux-x86_64$ ./apm-server -e
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.604+0800","log.origin":{"file.name":"beatcmd/beat.go","file.line":138},"message":"Home path: [/home/elastic/apm-server-8.6.2-linux-x86_64] Config path: [/home/elastic/apm-server-8.6.2-linux-x86_64] Data path: [/home/elastic/apm-server-8.6.2-linux-x86_64/data] Logs path: [/home/elastic/apm-server-8.6.2-linux-x86_64/logs]","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.627+0800","log.origin":{"file.name":"beatcmd/beat.go","file.line":145},"message":"Beat ID: 4b688e80-922a-47a4-9822-5fc547350902","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.629+0800","log.logger":"beat","log.origin":{"file.name":"beatcmd/beat.go","file.line":573},"message":"Beat info","service.name":"apm-server","system_info":{"beat":{"path":{"config":"/home/elastic/apm-server-8.6.2-linux-x86_64","data":"/home/elastic/apm-server-8.6.2-linux-x86_64/data","home":"/home/elastic/apm-server-8.6.2-linux-x86_64","logs":"/home/elastic/apm-server-8.6.2-linux-x86_64/logs"},"type":"apm-server","uuid":"4b688e80-922a-47a4-9822-5fc547350902"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.629+0800","log.logger":"beat","log.origin":{"file.name":"beatcmd/beat.go","file.line":581},"message":"Build info","service.name":"apm-server","system_info":{"build":{"commit":"8638b035d700e5e85e376252402b5375e4d4190b","time":"2023-02-13T13:01:54.000Z","version":"8.6.2"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.629+0800","log.logger":"beat","log.origin":{"file.name":"beatcmd/beat.go","file.line":584},"message":"Go runtime info","service.name":"apm-server","system_info":{"go":{"os":"linux","arch":"amd64","max_procs":4,"version":"go1.18.10"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.629+0800","log.origin":{"file.name":"beatcmd/maxprocs.go","file.line":68},"message":"maxprocs: Leaving GOMAXPROCS=4: CPU quota undefined","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.653+0800","log.logger":"beat","log.origin":{"file.name":"beatcmd/beat.go","file.line":588},"message":"Host info","service.name":"apm-server","system_info":{"host":{"architecture":"x86_64","boot_time":"2023-03-17T09:36:24+08:00","containerized":false,"name":"elastic","ip":["127.0.0.1/8","::1/128","10.19.183.100/24","fe80::66ed:2199:a5a4:6ec3/64"],"kernel_version":"5.19.0-35-generic","mac":["18:03:73:98:0b:a4","ac:72:89:eb:13:c4"],"os":{"type":"linux","family":"debian","platform":"ubuntu","name":"Ubuntu","version":"22.04.2 LTS (Jammy Jellyfish)","major":22,"minor":4,"patch":2,"codename":"jammy"},"timezone":"CST","timezone_offset_sec":28800,"id":"d3bfd0e7a0174635b185d14f08ebc716"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.654+0800","log.logger":"beat","log.origin":{"file.name":"beatcmd/beat.go","file.line":617},"message":"Process info","service.name":"apm-server","system_info":{"process":{"capabilities":{"inheritable":null,"permitted":null,"effective":null,"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"ambient":null},"cwd":"/home/elastic/apm-server-8.6.2-linux-x86_64","exe":"/home/elastic/apm-server-8.6.2-linux-x86_64/apm-server","name":"apm-server","pid":6195,"ppid":6136,"seccomp":{"mode":"disabled","no_new_privs":false},"start_time":"2023-03-17T09:51:50.500+0800"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.675+0800","log.logger":"beater","log.origin":{"file.name":"beater/http.go","file.line":142},"message":"Listening on: 127.0.0.1:8200","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.675+0800","log.origin":{"file.name":"beatcmd/beat.go","file.line":391},"message":"apm-server started.","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.681+0800","log.logger":"beater","log.origin":{"file.name":"beater/beater.go","file.line":195},"message":"no cgroups detected, falling back to total system memory","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.681+0800","log.logger":"beater","log.origin":{"file.name":"beater/beater.go","file.line":214},"message":"MaxConcurrentDecoders set to 490 based on 3.8gb of memory","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.727+0800","log.logger":"beater","log.origin":{"file.name":"beater/beater.go","file.line":701},"message":"modelindexer.EventBufferSize set to 3927 based on 3.8gb of memory","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.727+0800","log.logger":"beater","log.origin":{"file.name":"beater/beater.go","file.line":715},"message":"modelindexer.MaxRequests set to 15 based on 3.8gb of memory","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.729+0800","log.logger":"beater","log.origin":{"file.name":"beater/waitready.go","file.line":40},"message":"blocking ingestion until all preconditions are satisfied","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.729+0800","log.logger":"beater","log.origin":{"file.name":"apm-server/main.go","file.line":104},"message":"creating transaction metrics aggregation with config: {Interval:1m0s MaxTransactionGroups:10000 HDRHistogramSignificantFigures:2}","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.737+0800","log.logger":"beater","log.origin":{"file.name":"apm-server/main.go","file.line":119},"message":"creating service destinations aggregation with config: {Interval:1m0s MaxGroups:10000}","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.737+0800","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":133},"message":"Path / added to request handler","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.738+0800","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":133},"message":"Path /config/v1/agents added to request handler","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.738+0800","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":133},"message":"Path /config/v1/rum/agents added to request handler","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.738+0800","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":133},"message":"Path /intake/v2/rum/events added to request handler","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.738+0800","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":133},"message":"Path /intake/v3/rum/events added to request handler","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-03-17T09:51:51.738+0800","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":133},"message":"Path /intake/v2/events added to request handler","service.name":"apm-server","ecs.version":"1.6.0"}

May I ask how to correctly setup the test APM Server talking to this test ElasticSearch?

Paulo
PatPanda

2 Answers

TL;DR

There are 2 ways to run APM.

  • The fleet agent
  • The standalone server (legacy)

It seems you are going for the latter. Either way, both need you to enable the APM integration.

Starting in version 8.0.0, Fleet uses the APM integration to set up and manage APM index templates, ILM policies, and ingest pipelines. APM Server will only send data to Elasticsearch after the APM integration has been installed.

Manual solution:

  1. Open Kibana and select Add integrations > Elastic APM.
  2. Click APM integration.
  3. Click Add Elastic APM.
  4. Click Save and continue.
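
The same effect can be achieved without clicking through the UI: Kibana can install the integration itself at startup via its Fleet settings (this is exactly the mechanism the kibana.yml at the end of this answer relies on):

```yaml
# kibana.yml — ask Kibana to install the APM integration on startup
xpack.fleet.packages:
  - name: apm
    version: latest
```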

Programmatic solution (without SSL):

version: '3.8'
services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        echo "Waiting for Elasticsearch availability";
        until curl -s http://elasticsearch:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST  -u "elastic:change_me" -H "Content-Type: application/json" http://elasticsearch:9200/_security/user/kibana_system/_password -d "{\"password\":\"change_me\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/elasticsearch/elasticsearch.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120
  # Final storage: Elasticsearch
  elasticsearch:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - path.logs=/var/log/
      - cluster.name=elasticsearch
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTIC_PASSWORD=change_me
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.authc.api_key.enabled=true
    ports:
      - 9200:9200
    healthcheck:
        test: 
          [
            "CMD-SHELL", 
            "curl -s -I http://localhost:9200/_cluster/health || exit 1"
          ]
        interval: 10s
        timeout: 10s
        retries: 120

  # Kibana to display monitoring data
  kibana:
    depends_on:
      elasticsearch:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTIC_APM_ACTIVE=true
      - ELASTIC_APM_SERVER_URL=http://apm-server:8200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=change_me
    ports:
      - 5601:5601
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  # APM Server to collect traces
  apm-server:
    image: docker.elastic.co/apm/apm-server:${STACK_VERSION}
    container_name: apm-server
    ports:
      - 8200:8200
    command: >
       apm-server -e
         -E apm-server.rum.enabled=true
         -E setup.kibana.host=kibana:5601
         -E setup.template.settings.index.number_of_replicas=0
         -E apm-server.kibana.enabled=true
         -E apm-server.kibana.host=kibana:5601
         -E apm-server.kibana.protocol=http
         -E output.elasticsearch.hosts=["http://elasticsearch:9200"]
         -E apm-server.kibana.username=elastic
         -E apm-server.kibana.password=change_me
         -E output.elasticsearch.username=elastic
         -E output.elasticsearch.password=change_me
    healthcheck:
      interval: 10s
      retries: 120
      test: curl -I --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:8200/
    depends_on:
      elasticsearch:
        condition: service_healthy
volumes:
  certs:
    driver: local
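
The compose file above mounts a ./kibana.yml that is not shown; presumably the same file as in the SSL variant further down works here unchanged, since the SSL settings live in the compose environment rather than in kibana.yml (the encryption keys are test-only values):

```yaml
# ./kibana.yml for the non-SSL stack (test-only encryption keys)
server.host: 0.0.0.0
status.allowAnonymous: true
monitoring.ui.container.elasticsearch.enabled: true
telemetry.enabled: false
xpack.security.encryptionKey: fhjskloppd678ehkdfdlliverpoolfcr
xpack.encryptedSavedObjects.encryptionKey: fhjskloppd678ehkdfdlliverpoolfcr

# Install the APM integration automatically so APM Server becomes publish-ready
xpack.fleet.packages:
  - name: apm
    version: latest
```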

Programmatic solution (with SSL):

docker-compose.yml

version: '3.8'
services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: elasticsearch\n"\
          "    dns:\n"\
          "      - elasticsearch\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: kibana\n"\
          "    dns:\n"\
          "      - kibana\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: apm-server\n"\
          "    dns:\n"\
          "      - apm-server\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://elasticsearch:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:change_me" -H "Content-Type: application/json" https://elasticsearch:9200/_security/user/kibana_system/_password -d "{\"password\":\"change_me\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/elasticsearch/elasticsearch.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120
  # Final storage: Elasticsearch
  elasticsearch:
    depends_on:
      setup:
        condition: service_healthy
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - path.logs=/var/log/
      - cluster.name=elasticsearch
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTIC_PASSWORD=change_me
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/elasticsearch/elasticsearch.key
      - xpack.security.http.ssl.certificate=certs/elasticsearch/elasticsearch.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/elasticsearch/elasticsearch.key
      - xpack.security.transport.ssl.certificate=certs/elasticsearch/elasticsearch.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=basic
    ports:
      - 9200:9200
    healthcheck:
        test: 
          [
            "CMD-SHELL", 
            "curl -s -I --cacert config/certs/ca/ca.crt https://localhost:9200/_cluster/health || exit 1"
          ]
        interval: 10s
        timeout: 10s
        retries: 120

  # Kibana to display monitoring data
  kibana:
    depends_on:
      elasticsearch:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=https://elasticsearch:9200
      - ELASTIC_APM_ACTIVE=true
      - ELASTIC_APM_SERVER_URL=https://apm-server:8200
      - ELASTIC_APM_VERIFY_SERVER_CERT=false
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=change_me
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - SERVER_SSL_CERTIFICATE=config/certs/kibana/kibana.crt
      - SERVER_SSL_KEY=config/certs/kibana/kibana.key
      - SERVER_SSL_ENABLED=true
    ports:
      - 5601:5601
    volumes:
      - certs:/usr/share/kibana/config/certs
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I --cacert config/certs/ca/ca.crt https://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  # APM Server to collect traces
  apm-server:
    image: docker.elastic.co/apm/apm-server:${STACK_VERSION}
    container_name: apm-server
    ports:
      - 8200:8200
    command: >
       apm-server -e
         -E apm-server.rum.enabled=true
         -E setup.kibana.host=kibana:5601
         -E setup.template.settings.index.number_of_replicas=0
         -E apm-server.kibana.enabled=true
         -E apm-server.kibana.host=kibana:5601
         -E apm-server.kibana.protocol=https
         -E apm-server.kibana.username=elastic
         -E apm-server.kibana.password=change_me
         -E apm-server.kibana.ssl.certificate_authorities=config/certs/ca/ca.crt
         -E apm-server.kibana.ssl.verification_mode=none
         -E output.elasticsearch.hosts=["https://elasticsearch:9200"]
         -E output.elasticsearch.ssl.certificate_authorities=config/certs/ca/ca.crt
         -E output.elasticsearch.ssl.verification_mode=none
         -E output.elasticsearch.username=elastic
         -E output.elasticsearch.password=change_me
         -E apm-server.ssl.enabled=true
         -E apm-server.ssl.certificate=config/certs/apm-server/apm-server.crt
         -E apm-server.ssl.key=config/certs/apm-server/apm-server.key
         -E apm-server.ssl.verification_mode=none
    volumes:
      - certs:/usr/share/apm-server/config/certs
    healthcheck:
      interval: 10s
      retries: 120
      test: curl -I --cacert config/certs/ca/ca.crt --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null https://localhost:8200/
    depends_on:
      elasticsearch:
        condition: service_healthy
volumes:
  certs:
    driver: local

kibana.yml

server.host: 0.0.0.0
status.allowAnonymous: true
monitoring.ui.container.elasticsearch.enabled: true
telemetry.enabled: false
xpack.security.encryptionKey: fhjskloppd678ehkdfdlliverpoolfcr
xpack.encryptedSavedObjects.encryptionKey: fhjskloppd678ehkdfdlliverpoolfcr

xpack.fleet.packages:
  - name: apm
    version: latest

This kibana.yml tells Kibana to install the APM integration automatically on startup.

Paulo
  • Hello @Paulo, thank you for this clear sample. You are correct, I wish to use the "The stand alone server (legacy)" hope you do not mind. As I mentioned in my question, I wish to work on a stack that is insecure, meaning, no username password, no certificate. I understand the risks. The sample you mentioned has APM working. However, everything, from elastic to kibana to apm are secured. Do you have the non-secure version please? – PatPanda Mar 20 '23 at 06:46
  • Hi PatPanda, I don't. But I assume you just need to drop the setup. Remove all the security settings and set xpack.security.enabled: false in elasticsearch.yml and kibana.yml – Paulo Mar 20 '23 at 08:08
  • Thank you for the comment. I did try that, and the APM server will not work: ```"publish_ready": false```. If I just "remove all the security settings", I will be back to square one, which is the APM server not working (the reason I asked the question in the first place) – PatPanda Mar 20 '23 at 08:11
  • You may find some insight in the logs of Kibana; I suppose disabling the security has an impact on Kibana successfully setting up the integration. – Paulo Mar 20 '23 at 08:27
  • Unfortunately, nothing that helps me solve it. I might be missing something, hence the question. If you happen to know and have an example, I will be glad to award the bounty – PatPanda Mar 20 '23 at 08:48
  • To my surprise https is not mandatory. xpack still is. – Paulo Mar 22 '23 at 17:37
  • Yes, basically, it is very complicated to have the flow insecure even if one understands the risks. Hence this question on SO – PatPanda Mar 22 '23 at 23:19
This worked with the legacy APM Server, all in K8s, on version 8.6.1:

  1. Open Kibana
  2. Go to Fleet -> Agent policies
  3. Create new agent policy (any custom name is fine)
  4. Add APM Integration

The policy itself will serve no purpose, since you won't assign any agent to use it, except that adding the integration creates all the necessary indices, templates, etc. It can probably be deleted afterwards.