No data in #883

Open
ryebridge opened this issue Apr 4, 2024 · 7 comments
@ryebridge

I have installed the helm chart from the Kube Prometheus stack here:

https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md

....then I added:

https://github.com/prometheus-community/elasticsearch_exporter

...and updated the elastic-prometheus-elasticsearch-exporter deployment with the following options:

    - --log.format=logfmt
    - --log.level=info
    - --es.uri=https://admin:[email protected]:9200
    - --es.all
    - --es.indices
    - --es.ssl-skip-verify
    - --es.indices_settings
    - --es.indices_mappings
    - --es.shards
    - --collector.snapshots
    - --es.timeout=30s
    - --web.listen-address=:9108
    - --web.telemetry-path=/metrics
    image: quay.io/prometheuscommunity/elasticsearch-exporter:v1.7.0

...and when I check the pod logs it seems to be collecting data:

level=info ts=2024-04-04T09:37:36.260266299Z caller=clusterinfo.go:214 msg="triggering initial cluster info call"
level=info ts=2024-04-04T09:37:36.260317077Z caller=clusterinfo.go:183 msg="providing consumers with updated cluster info label"
level=info ts=2024-04-04T09:37:36.271143372Z caller=main.go:244 msg="started cluster info retriever" interval=5m0s
level=info ts=2024-04-04T09:37:36.271525105Z caller=tls_config.go:274 msg="Listening on" address=[::]:9108
level=info ts=2024-04-04T09:37:36.271545007Z caller=tls_config.go:277 msg="TLS is disabled." http2=false address=[::]:9108
level=info ts=2024-04-04T09:42:36.260458556Z caller=clusterinfo.go:183 msg="providing consumers with updated cluster info label"

....but when I log into Prometheus, I can't see anything related to elastic. Am I missing some additional configuration?

Thanks for any tips in advance.

Regards,
John

@sysadmind
Contributor

I suspect what is happening here is that your prometheus is not configured to scrape the exporter.
Some things you can check:

  • Check the /metrics endpoint on the exporter. Do you see elasticsearch metrics? If the answer is yes, the exporter is working.
  • Do you see the exporter listed in the targets in the prometheus dashboard? If not, prometheus is not scraping the metrics. I think the kube-prometheus-stack uses a CRD for adding targets. I think these are the docs, but I'm not positive: https://prometheus-operator.dev/docs/user-guides/scrapeconfig/ (rough sketch below).
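
If you go the ScrapeConfig route, a minimal sketch might look something like this. I'm assuming the exporter is reachable in-cluster through a Service named elastic-prometheus-elasticsearch-exporter on port 9108 and that your Prometheus selects ScrapeConfigs labelled release: kube-prometheus-stack, so adjust those to whatever you actually have:

apiVersion: monitoring.coreos.com/v1alpha1
kind: ScrapeConfig
metadata:
  name: elasticsearch-exporter
  namespace: kube-prometheus-stack        # assumed release namespace
  labels:
    release: kube-prometheus-stack        # must match the scrapeConfigSelector on the Prometheus resource
spec:
  staticConfigs:
    - targets:
        # assumed Service name; use <name>.<namespace>.svc:9108 if it lives in another namespace
        - elastic-prometheus-elasticsearch-exporter:9108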

@ryebridge
Author

Thanks so much for replying, I've been trying to get this to work for a few days now. Can you please explain how I could check the /metrics endpoint on the exporter?

I logged into the "kube-prometheus-stack-grafana" pod and did a curl against the "pgexporter-prometheus-postgres-exporter" service IP address, which according to the env variable KUBE_PROMETHEUS_STACK_KUBE_STATE_METRICS_PORT_8080_TCP_PORT is 8080, but it's not able to connect at all.

I can't see the exporter as a target in the dashboard at all. This is what I have for the prometheuses.monitoring.coreos.com CRD:

scrapeConfigSelector:
  matchLabels:
    release: kube-prometheus-stack

...do I need to create a "ScrapeConfig" as it suggests here? I don't see any ScrapeConfig objects in the "kube-prometheus-stack" namespace.

https://medium.com/@helia.barroso/a-guide-to-service-discovery-with-prometheus-operator-how-to-use-pod-monitor-service-monitor-6a7e4e27b303

@sysadmind
Contributor

For your first question about checking the exporter pod: I think you're conflating the connection to prometheus with the connection to the exporter. The environment variable you mention is for kube-prometheus-stack, not the elasticsearch-exporter. In the command args you originally mentioned, as well as in the exporter logs, the exporter is listening on port 9108. I think you want something similar to this: curl pgexporter-prometheus-postgres-exporter:9108/metrics

For the scrape configs, I think I linked you to the wrong section. Try here: https://prometheus-operator.dev/docs/user-guides/getting-started/. That talks about using a ServiceMonitor to monitor a kubernetes service. There is also a PodMonitor if you don't have a service.

Here's an example:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
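
Adapted to your exporter, it might look roughly like this. I'm guessing at the Service labels, namespace, and port name here (check them with kubectl get svc --show-labels), and the release label should be whatever your kube-prometheus-stack Prometheus is configured to select; the important part is that the selector matches the Service that exposes the exporter on 9108:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: elasticsearch-exporter
  namespace: kube-prometheus-stack
  labels:
    release: kube-prometheus-stack            # so the operator's serviceMonitorSelector picks it up
spec:
  namespaceSelector:
    matchNames:
      - my-platform                           # assumed namespace of the exporter Service
  selector:
    matchLabels:
      app: prometheus-elasticsearch-exporter  # hypothetical label; use your exporter Service's actual labels
  endpoints:
    - port: http                              # must match the port *name* on the exporter Service (9108)
      path: /metrics
      interval: 30s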

@ryebridge
Author

ryebridge commented Apr 4, 2024

Thanks Joe,

Really appreciate you helping me out here, I must be missing a step :-(

I'm getting a little confused between exporters and service monitors. I initially tried to set up a service monitor against the elastic service but couldn't see any metrics in Prometheus, so I assumed the alternative was to use an exporter and configure the connection in the deployment.

Here's my first attempt using a service monitor against the elastic service.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    meta.helm.sh/release-namespace: kube-prometheus-stack
  labels:
    app: prometheus
    release: prometheus
  name: my-es-monitor
  namespace: kube-prometheus-stack
spec:
  endpoints:
  - interval: 30s
    path: /metrics
    scrapeTimeout: 20s
    targetPort: 9108
  namespaceSelector:
    matchNames:
    - my-platform
  selector:
    matchLabels:
      app.kubernetes.io/name: opensearch

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: opensearch-logs-data
  labels:
    app.kubernetes.io/component: opensearch-logs-data
    app.kubernetes.io/instance: opensearch-logs-data
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: opensearch
    app.kubernetes.io/version: 1.3.13
    helm.sh/chart: opensearch-1.23.1
  name: opensearch-logs-data
  namespace: my-platform
spec:
  clusterIP: xx.xx.xxx.xxx
  clusterIPs:
  - xx.xx.xxx.xxx
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: transport
    port: 9300
    protocol: TCP
    targetPort: 9300
  selector:
    app.kubernetes.io/instance: opensearch-logs-data
    app.kubernetes.io/name: opensearch
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

@sysadmind
Contributor

So you have prometheus - this is your database. It stores the metrics and you can query it. It also scrapes metrics from exporters.

Exporters - these are things that expose metrics. They are often translators of data. In this case elasticsearch_exporter takes data from elasticsearch and exposes it as prometheus metrics. By itself the exporter only exposes metrics over HTTP(s).

The kube-prometheus-stack glues a bunch of stuff together to make many pieces work together. The service monitor is a way to tell prometheus about kubernetes services to monitor.

What you have in your last comment looks okay to me, but I'm not an expert. If you still don't have the target in prometheus, it's probably something with the config for kube-prometheus-stack. I think this is the repo for that: https://github.com/prometheus-operator/prometheus-operator
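
One thing I'd double-check, and I'm not sure how your chart values are set: the Prometheus resource that kube-prometheus-stack creates typically only selects ServiceMonitors whose labels match its serviceMonitorSelector, so compare the labels on your ServiceMonitor against something like this:

# relevant fields from: kubectl -n kube-prometheus-stack get prometheus -o yaml
spec:
  serviceMonitorSelector:
    matchLabels:
      release: kube-prometheus-stack      # your ServiceMonitor needs a matching label
  serviceMonitorNamespaceSelector: {}     # an empty selector means ServiceMonitors from all namespaces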

You could also try the #prometheus channel in the CNCF slack. That might be more fruitful for kube-prometheus-stack issues.

@ryebridge
Author

Thanks again, I'll give that a try :)

@ryebridge
Author

Hey again, I exposed the elastic exporter service as a NodePort service and confirmed it's returning metrics, but I still can't get them into Prometheus :-( From reading further, it seems a ServiceMonitor is required to avoid having to manually add a new scrape job to the Prometheus config.
