
Python MongoDB ELK APM project

Tiny Python project





Building the application

For this project, we will build an app that allows us to create, update, and delete data in a MongoDB database.
Prerequisites:  Docker
                Python and pip
  
To install Docker, you can follow the steps in this link: https://docs.docker.com/engine/install/

We need Docker in order to create the MongoDB container, and many of the tools we are going to use in this project run as containers. We will rely on several self-hosted tools.
                       
For the first part of our project, we need a MongoDB database and our Python app. The easiest way to get the database is to run MongoDB as a container, so on your machine just run the command

docker run --name mongo-database -p 27017:27017 -d mongo

This will create and start a MongoDB container named mongo-database and expose port 27017 of the container on port 27017 of the host. Our Python app will connect to this database and store its data.
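Before writing the app itself, you can optionally check that the container is reachable from the machine that will run the Python code. The snippet below is only a sketch: it assumes pymongo is available (it is installed together with flask_pymongo) and that DOCKER_SERVER_IP_ADDRESS is the IP of the host running the container.

# check_mongo.py - optional connectivity check (illustrative, not part of the app)
from pymongo import MongoClient

# replace DOCKER_SERVER_IP_ADDRESS with the IP of the Docker host
client = MongoClient("mongodb://DOCKER_SERVER_IP_ADDRESS:27017/", serverSelectionTimeoutMS=3000)
print(client.admin.command("ping"))  # prints {'ok': 1.0} when the container is up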

Let's create the Python application. On the Python app server, create the file mongo.py and paste the following code:



from flask import Flask, request, jsonify
from flask_pymongo import PyMongo
from bson.objectid import ObjectId
import socket
app = Flask(__name__)


app.config["MONGO_URI"] = "mongodb://DOCKER_SERVER_IP_ADDRESS:27017/db_test"

mongo = PyMongo(app)
db = mongo.db
@app.route("/")
def index():
    hostname = socket.gethostname()
    return jsonify(
        message="This app is running in host {} pod!".format(hostname)
    )
@app.route("/tasks")
def get_all_tasks():
    tasks = db.task.find()
    data = []
    for task in tasks:
        item = {
            "id": str(task["_id"]),
            "task": task["task"]
        }
        data.append(item)
    return jsonify(
        data=data
    )
@app.route("/task", methods=["POST"])
def create_task():
    data = request.get_json(force=True)
    db.task.insert_one({"task": data["task"]})
    return jsonify(
        message="Task saved successfully!"
    )

@app.route("/tasks/delete", methods=["POST"])
def delete_all_tasks():
    db.task.delete_many({})  # Collection.remove() was removed in recent PyMongo versions
    return jsonify(
        message="All Tasks deleted!"
    )
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
 

The app will listen on port 5000. We can test it by running these commands:

curl  http://APP_PYTHON_IP_ADDRESS:5000/tasks
curl -X POST -d "{\"task\": \"Etudiant  40\"}" http://APP_PYTHON_IP_ADDRESS:5000/task
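If you prefer testing from Python rather than curl, here is a minimal sketch doing the same two calls with the requests library (assumed to be installed with pip install requests); replace APP_PYTHON_IP_ADDRESS with your app server's IP.

# test_tasks.py - illustrative only
import requests

BASE = "http://APP_PYTHON_IP_ADDRESS:5000"

# create a task (same as the curl POST above)
print(requests.post(BASE + "/task", json={"task": "Etudiant 40"}).json())

# list all tasks (same as the curl GET above)
print(requests.get(BASE + "/tasks").json())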

 

Monitoring your application

You can monitor your application with Kibana and Elasticsearch: you can see the traces and the transactions related to your requests. You have to build an ELK stack and configure a Fleet server; for this part, you can follow the instructions in this link.

Elastic APM has built-in support for Flask. The same steps also apply to Flask-RESTful and Flask-RESTPlus.


You have to install the Elastic APM agent on your Python app server with the command:

pip install elastic-apm[flask]

We have to make a few modifications to our code:

from flask import Flask, request, jsonify
from flask_pymongo import PyMongo
from elasticapm.contrib.flask import ElasticAPM
from bson.objectid import ObjectId
import socket
app = Flask(__name__)

app.config['ELASTIC_APM'] = {
  'SERVICE_NAME': 'Devops-Python-Steve',
  'SECRET_TOKEN': '',
  'SERVER_URL': 'http://FLEET_SERVER_IP_ADDRESS:8200',
}
apm = ElasticAPM(app)  # initialize the agent right after its configuration


app.config["MONGO_URI"] = "mongodb://DOCKER_SERVER_IP_ADDRESS:27017/db_test"

mongo = PyMongo(app)
db = mongo.db
@app.route("/")
def index():
    hostname = socket.gethostname()
    return jsonify(
        message="This app is running in host {} pod!".format(hostname)
    )

@app.route("/tasks")
def get_all_tasks():
    tasks = db.task.find()
    data = []
    for task in tasks:
        item = {
            "id": str(task["_id"]),
            "task": task["task"]
        }
        data.append(item)
    return jsonify(
        data=data
    )
@app.route("/task", methods=["POST"])
def create_task():
    data = request.get_json(force=True)
    db.task.insert_one({"task": data["task"]})
    return jsonify(
        message="Task saved successfully!"
    )


@app.route("/tasks/delete", methods=["POST"])
def delete_all_tasks():
    db.task.delete_many({})  # Collection.remove() was removed in recent PyMongo versions
    return jsonify(
        message="All Tasks deleted!"
    )
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)



After running your app and making some requests, you can see the traces in Kibana. To do so, open Kibana, click on the APM menu, and select your service in the Services table.
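The automatic Flask and PyMongo instrumentation is usually enough, but the agent also lets you wrap specific pieces of code in custom spans to get more detail in the traces. The snippet below is only a sketch of how the /tasks route could be instrumented; the span name fetch-tasks and the span_type value are illustrative choices.

import elasticapm

@app.route("/tasks")
def get_all_tasks():
    # this custom span shows up under the request's transaction in Kibana
    with elasticapm.capture_span("fetch-tasks", span_type="db.mongodb.query"):
        tasks = list(db.task.find())
    data = [{"id": str(task["_id"]), "task": task["task"]} for task in tasks]
    return jsonify(data=data)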




Deploying your application with Docker

To deploy your application you can use a Docker container, which means you will have both your database and your app running as containers. You could use Kubernetes to manage the deployment and high availability; in our example, we will simply deploy the app as a container.

Let's create a Dockerfile to build the image of our container


FROM ubuntu:20.04
RUN  apt-get update && \
     apt-get install -y python-is-python3 python3-pip && \
     pip install Flask "elastic-apm[flask]" flask_pymongo
COPY mongo.py .
ENTRYPOINT ["python", "mongo.py"]

You can build the image by going to the directory where you saved your mongo.py file and the Dockerfile, then running

 docker build -t python_mongo .

and you can create the container with the command 

docker run --name mongo-python_test -p 5000:5000 -d python_mongo
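Once the container is up, you can quickly confirm it responds. A small sketch, assuming the requests library is installed and that you run it on the Docker host where port 5000 is published:

# smoke_test.py - illustrative check of the containerized app
import requests

resp = requests.get("http://localhost:5000/")
print(resp.json())  # the message should contain the container's hostname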
