Installing various environments with Docker

Manage containers with Portainer

Install Redis#

First, download the redis.conf file from the official Redis website and edit it.#
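
For example (the exact download URL is an assumption and may change between Redis versions):
wget http://download.redis.io/redis-stable/redis.conf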

Modify the redis.conf configuration file:
The main configurations are as follows:

 bind 127.0.0.1 # Comment out this line to allow external access to Redis
 daemonize no # Do not run Redis as a daemon (setting it to yes makes the container exit immediately after starting)
 requirepass your_password # Set a password for Redis
 appendonly yes # Enable Redis persistence; the default is no
 tcp-keepalive 300 # Prevents the "remote host closed an existing connection" error; the default is 300

Create a directory on the host to map into the container, i.e., the local storage location.#

Create a local storage location for Redis:

You can customize it. Since some of my Docker configuration files are stored in the /mydata directory, I will create a Redis directory under /mydata for easy management in the future.
mkdir /mydata/redis
mkdir /mydata/redis/data
Copy the configuration file to the newly created directory.
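For example (assuming the edited redis.conf is in the current directory):
cp redis.conf /mydata/redis/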

File authorization#

chmod 777 /mydata/redis/redis.conf

Start Redis#

docker run -p 6379:6379 --name redis -v /mydata/redis/redis.conf:/etc/redis/redis.conf  -v /mydata/redis/data:/data -d redis redis-server /etc/redis/redis.conf --appendonly yes

Parameter explanation:

-p 6379:6379: Map the container's port 6379 to port 6379 on the host
-v /mydata/redis/redis.conf:/etc/redis/redis.conf: Mount the host's configured redis.conf at this path inside the container
-v /mydata/redis/data:/data: Mount the container's /data directory on the host so Redis's persisted data can be backed up
redis-server /etc/redis/redis.conf: The key part; it makes Redis start with the configuration in redis.conf
--appendonly yes: Enable data persistence after Redis starts
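
Once the container is running, a quick way to check that the password took effect (your_password is the value set via requirepass):
docker exec -it redis redis-cli -a your_password ping
A reply of PONG means Redis is up and accepting authenticated connections.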

Install Elasticsearch 7.9.3#

Kibana will be installed locally instead (to avoid consuming server resources).

  1. Pull the image
docker pull elasticsearch:7.9.3
  2. Create the required folders and files
mkdir -p /mydata/elasticsearch/config
mkdir -p /mydata/elasticsearch/data
mkdir -p /mydata/elasticsearch/plugins
echo "http.host: 0.0.0.0" >> /mydata/elasticsearch/config/elasticsearch.yml
  3. Assign permissions to the folders
chmod -R 777 /mydata/elasticsearch/
  4. Create and start the Elasticsearch container
docker run --name elasticsearch -p 9200:9200 \
 -p 9300:9300 \
 -e "discovery.type=single-node" \
 -e ES_JAVA_OPTS="-Xms64m -Xmx128m" \
 -v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
 -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
 -v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
 -d elasticsearch:7.9.3
  5. Set the container to start automatically
docker update elasticsearch --restart=always
  6. Install the IK Chinese word segmentation plugin
cd /mydata/elasticsearch/plugins/
wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.9.3/elasticsearch-analysis-ik-7.9.3.zip
mkdir ik
unzip -d ik/ elasticsearch-analysis-ik-7.9.3.zip 
docker restart elasticsearch
  7. Open port 9200 in the firewall
firewall-cmd --zone=public --add-port=9200/tcp --permanent
systemctl restart firewalld.service
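
Once the port is open, a quick check that Elasticsearch is reachable and that the IK plugin loaded (replace server_ip with the host's address; the sample text is arbitrary):
curl http://server_ip:9200
curl -H 'Content-Type: application/json' -X POST 'http://server_ip:9200/_analyze' -d '{"analyzer":"ik_smart","text":"中文分词测试"}'
The first command should return cluster information; the second should return the tokens produced by the ik_smart analyzer.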

Install Kafka and Zookeeper#

Guide on Juejin
Install Kafka

docker run -d --name kafka -p 9092:9092 \
 -e KAFKA_BROKER_ID=0 \
 -e KAFKA_ZOOKEEPER_CONNECT=server_ip:2181 \
 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://server_ip:9092 \
 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
 -e KAFKA_HEAP_OPTS="-Xmx256M -Xms128M" \
 -t wurstmeister/kafka
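
The command above points Kafka at a Zookeeper instance on server_ip:2181. If one is not already running, a minimal sketch using the companion wurstmeister/zookeeper image (the image choice and port mapping are assumptions; start it before the Kafka container, replacing server_ip with the host's address in both commands):
docker run -d --name zookeeper -p 2181:2181 -t wurstmeister/zookeeper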