I am running an ELK stack with Docker for log management, currently configured with ES 1.7, Logstash 1.5.4, and Kibana 4.1.4. Now I am trying to upgrade Elasticsearch to 2.4.0, using the tar.gz from https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.0/elasticsearch-2.4.0.tar.gz. Since ES 2.x does not allow running as the root user, I have been using the
-Des.insecure.allow.root=true
option when starting the Elasticsearch service, but my container fails to start. The logs don't point to any problem:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   874  100   874    0     0   874k      0 --:--:-- --:--:-- --:--:--  853k
//opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found
scheduler@0.0.0 start /opt/log-management/scheduler
node scheduler-app.js
ESExportWrapper@0.0.0 start /opt/log-management/ESExportWrapper
node app.js
Jobs are registered
[2016-09-28 09:04:24,646][INFO ][bootstrap ] max_open_files [1048576]
[2016-09-28 09:04:24,686][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
Native thread-sleep not available.
This will result in much slower performance, but it will still work.
You should re-install spawn-sync or upgrade to the lastest version of node if possible.
check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
[2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] version[2.4.0],pid[1],build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] initializing ...
Wed, 28 Sep 2016 09:04:24 GMT express deprecated app.configure: check app.get('env') in an if statement at lib/express/index.js:60:5
Wed, 28 Sep 2016 09:04:24 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
Wed, 28 Sep 2016 09:04:24 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
[2016-09-28 09:04:25,399][INFO ][plugins ] [Kismet Deadly] modules [reindex,lang-expression,lang-groovy],plugins [],sites []
[2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs]
[2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] heap size [7.8gb],compressed ordinary object pointers [true]
[2016-09-28 09:04:25,455][WARN ][threadpool ] [Kismet Deadly] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
[2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] initialized
[2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] starting ...
[2016-09-28 09:04:27,695][INFO ][transport ] [Kismet Deadly] publish_address {10.240.118.68:9300},bound_addresses {[::1]:9300},{127.0.0.1:9300}
[2016-09-28 09:04:27,700][INFO ][discovery ] [Kismet Deadly] ccs-elasticsearch/q2Sv4FUFROGIdIWJrNENVA
Any clues would be much appreciated.
EDIT 1: Since `//opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found` is an error and the Docker image has no hostname utility, I tried using the `uname -n` command to get the HOSTNAME for ES. It no longer throws the hostname error, but the problem persists: it does not start.
Is that a correct substitute?
One more doubt: the currently running ES 1.7 image also lacks the hostname utility, yet it runs without any problem. Very confusing.
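For reference, the substitution described above can be sketched as a small shim near the top of the start script. This assumes `uname` is present in the image and that `HOSTNAME` is the variable the ES startup script would otherwise populate via `hostname`:

```shell
# Derive the host name without the hostname(1) binary;
# on Linux, `uname -n` prints the same node name.
HOSTNAME="$(uname -n)"
export HOSTNAME
echo "node name: ${HOSTNAME}"
```

Both commands read the same kernel node name, so this shim is a drop-in replacement in images that ship `uname` but not `hostname`.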
Logs after using `uname -n`:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1083  100  1083    0     0  1093k      0 --:--:-- --:--:-- --:--:-- 1057k
> ESExportWrapper@0.0.0 start /opt/log-management/ESExportWrapper
> node app.js
> scheduler@0.0.0 start /opt/log-management/scheduler
> node scheduler-app.js
Jobs are registered
[2016-09-30 10:10:37,785][INFO ][bootstrap ] max_open_files [1048576]
[2016-09-30 10:10:37,822][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
Native thread-sleep not available.
This will result in much slower performance, but it will still work.
You should re-install spawn-sync or upgrade to the lastest version of node if possible.
check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
[2016-09-30 10:10:37,993][INFO ][node ] [Helleyes] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-30 10:10:37,993][INFO ][node ] [Helleyes] initializing ...
Fri, 30 Sep 2016 10:10:38 GMT express deprecated app.configure: check app.get('env') in an if statement at lib/express/index.js:60:5
Fri, 30 Sep 2016 10:10:38 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
Fri, 30 Sep 2016 10:10:38 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
[2016-09-30 10:10:38,435][INFO ][plugins ] [Helleyes] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-09-30 10:10:38,455][INFO ][env ] [Helleyes] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs]
[2016-09-30 10:10:38,456][INFO ][env ] [Helleyes] heap size [7.8gb],compressed ordinary object pointers [true]
[2016-09-30 10:10:38,483][WARN ][threadpool ] [Helleyes] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
[2016-09-30 10:10:40,151][INFO ][node ] [Helleyes] initialized
[2016-09-30 10:10:40,152][INFO ][node ] [Helleyes] starting ...
[2016-09-30 10:10:40,278][INFO ][transport ] [Helleyes] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-09-30 10:10:40,283][INFO ][discovery ] [Helleyes] ccs-elasticsearch/wvVGkhxnTqaa_wS5GGjZBQ
[2016-09-30 10:10:40,360][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0x329b2977, /172.17.0.15:53388 => /10.240.118.69:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:40,360][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0xdf31e5e6, /172.17.0.15:46846 => /10.240.118.70:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:41,798][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0xcff0b2b6, /172.17.0.15:46958 => /10.240.118.70:9300]], closing connection
[2016-09-30 10:10:41,800][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0xb47caaf6, /172.17.0.15:53501 => /10.240.118.69:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:43,302][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0x6247aa3f, /172.17.0.15:47057 => /10.240.118.70:9300]], closing connection
[2016-09-30 10:10:43,303][WARN ][transport.netty ] [Helleyes] exception caught on transport layer [[id: 0x1d266aa0, /172.17.0.15:53598 => /10.240.118.69:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:44,807][INFO ][cluster.service ] [Helleyes] new_master {Helleyes}{wvVGkhxnTqaa_wS5GGjZBQ}{10.240.118.68}{10.240.118.68:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-09-30 10:10:44,852][INFO ][http ] [Helleyes] publish_address {10.240.118.68:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2016-09-30 10:10:44,852][INFO ][node ] [Helleyes] started
[2016-09-30 10:10:44,984][INFO ][gateway ] [Helleyes] recovered [32] indices into cluster_state
Error after the failed deployment:
failed: [10.240.118.68] (item={u'url': u'http://10.240.118.68:9200'}) => {"content": "", "failed": true, "item": {"url": "http://10.240.118.68:9200"}, "msg": "Status code was not [200]: request failed:
EDIT 2: Even with the hostname utility installed and working correctly, the container fails to start. The logs are the same as in EDIT 1.
EDIT 3: The container does start, but it is unreachable at http://nodeip:9200. Of the 3 nodes, only 1 has 2.4; the other 2 still run 1.7, and the 2.4 node is not part of the cluster. Inside the container running 2.4, curl to localhost:9200 returns the Elasticsearch response, but from outside it is unreachable.
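This symptom matches a default that changed between the two versions: Elasticsearch 2.x binds to the loopback interface only, while 1.7 bound to all interfaces, so a 2.4 container answers curl on localhost but is invisible to other hosts until the bind address is widened. A minimal elasticsearch.yml sketch:

```yaml
# elasticsearch.yml -- ES 2.x binds to 127.0.0.1/[::1] by default;
# bind to all interfaces so other nodes can reach ports 9200/9300
network.host: 0.0.0.0
```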
EDIT 4: I tried a bare-bones installation of ES 2.4 on the cluster; in the same setup, ES 1.7 works fine. I ran the ES migration plugin to check whether the cluster can run ES 2.4, and it gave me green. The basic installation details are as follows.
Dockerfile
#Pulling SLES12 thin base image
FROM private-registry-1
#Author
MAINTAINER XYZ
# Pre-requisite - Adding repositories
RUN zypper ar private-registry-2
RUN zypper --no-gpg-checks -n refresh
#Install required packages and dependencies
RUN zypper -n in net-tools-1.60-764.185 wget-1.14-7.1 python-2.7.9-14.1 python-base-2.7.9-14.1 tar-1.27.1-7.1
#Downloading elasticsearch executable
ENV ES_VERSION=2.4.0
ENV ES_DIR="//opt//log-management//elasticsearch"
ENV ES_CONFIG_PATH="${ES_DIR}//config"
ENV ES_REST_PORT=9200
ENV ES_INTERNAL_COM_PORT=9300
WORKDIR /opt/log-management
RUN wget private-registry-3/elasticsearch/elasticsearch/${ES_VERSION}.tar/elasticsearch-${ES_VERSION}.tar.gz --no-check-certificate
RUN tar -xzvf ${ES_DIR}-${ES_VERSION}.tar.gz \
&& rm ${ES_DIR}-${ES_VERSION}.tar.gz \
&& mv ${ES_DIR}-${ES_VERSION} ${ES_DIR}
#Exposing elasticsearch server container port to the HOST
EXPOSE ${ES_REST_PORT} ${ES_INTERNAL_COM_PORT}
#Removing binary files which are not needed
RUN zypper -n rm wget
# Removing zypper repos
RUN zypper rr caspiancs_common
#Running elasticsearch executable
WORKDIR ${ES_DIR}
ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true
Built with:
docker build -t es-test .
1) When using `docker run -d --name elasticsearch --net=host -p 9200:9200 -p 9300:9300 es-test`, as suggested in one of the comments, and curling localhost:9200 inside the container or on the node running the container, I get the correct response. I still cannot reach the other nodes of the cluster on port 9200.
2) When using `docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 es-test` and curling localhost:9200 inside the container, it works fine, but on the node it gives me the error
curl: (56) Recv failure: Connection reset by peer
I still cannot reach the other nodes of the cluster on port 9200.
EDIT 5: Using this answer on this question, I got ES 2.4 running in all three containers. But ES fails to form a cluster across the three containers. The network configuration is as follows:
network.host: 0.0.0.0
http.port: 9200
#configure elasticsearch.yml for clustering
echo 'discovery.zen.ping.unicast.hosts: [ELASTICSEARCH_IPS] ' >> ${ES_CONFIG_PATH}/elasticsearch.yml
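The echo line above appends the peer list to elasticsearch.yml at build/startup time. A runnable sketch of that step, where the IPs and the config path are illustrative placeholders (ELASTICSEARCH_IPS is substituted from the environment, as in the snippet):

```shell
# Write the unicast peer list into elasticsearch.yml so Zen discovery
# can find the other nodes (multicast is unavailable across Docker hosts).
ES_CONFIG_PATH="/tmp/log-management-demo/config"   # placeholder config dir
ELASTICSEARCH_IPS='"10.240.118.68", "10.240.118.69", "10.240.118.70"'
mkdir -p "${ES_CONFIG_PATH}"
echo "discovery.zen.ping.unicast.hosts: [${ELASTICSEARCH_IPS}]" >> "${ES_CONFIG_PATH}/elasticsearch.yml"
cat "${ES_CONFIG_PATH}/elasticsearch.yml"
```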
The logs obtained with docker logs contain the following:
[2016-10-06 12:31:28,887][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
[2016-10-06 12:31:29,080][INFO ][node ] [Screech] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-10-06 12:31:29,081][INFO ][node ] [Screech] initializing ...
[2016-10-06 12:31:29,652][INFO ][plugins ] [Screech] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-10-06 12:31:29,684][INFO ][env ] [Screech] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8.7gb], net total_space [9.7gb], spins? [unknown], types [rootfs]
[2016-10-06 12:31:29,684][INFO ][env ] [Screech] heap size [989.8mb],compressed ordinary object pointers [true]
[2016-10-06 12:31:29,720][WARN ][threadpool ] [Screech] requested thread pool size [60] for [index] is too large; setting to maximum [5] instead
[2016-10-06 12:31:31,387][INFO ][node ] [Screech] initialized
[2016-10-06 12:31:31,387][INFO ][node ] [Screech] starting ...
[2016-10-06 12:31:31,456][INFO ][transport ] [Screech] publish_address {172.17.0.16:9300},bound_addresses {[::]:9300}
[2016-10-06 12:31:31,465][INFO ][discovery ] [Screech] ccs-elasticsearch/YeO41MBIR3uqzZzISwalmw
[2016-10-06 12:31:34,500][WARN ][discovery.zen ] [Screech] failed to connect to master [{Bobster}{Gh-6yBggRIypr7OuW1tXhA}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Bobster][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
    at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
    at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
    at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
    at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
Whenever I set network.host to the IP address of the host running the container, I end up back in the old situation: only one container running ES 2.4 and the other two running 1.7.
I just noticed that docker-proxy is listening on 9300, or at least I think it is:
elasticsearch-server/src/main/docker # netstat -nlp | grep 9300
tcp 0 0 :::9300 :::* LISTEN 6656/docker-proxy
Any clues here?
Best answer
I was able to form the cluster using the following settings:
network.publish_host=CONTAINER_HOST_ADDRESS, i.e. the address of the node the container is running on.
network.bind_host = 0.0.0.0
transport.publish_port = 9300
transport.publish_host = CONTAINER_HOST_ADDRESS
transport.publish_port is important when you run ES behind a proxy/load balancer such as Nginx or HAProxy.
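Put together as an elasticsearch.yml fragment (a sketch of the settings above; the address 10.240.118.68 stands in for CONTAINER_HOST_ADDRESS, the routable IP of the Docker host):

```yaml
network.bind_host: 0.0.0.0            # listen on all interfaces inside the container
network.publish_host: 10.240.118.68   # advertise the Docker host's address, not the 172.17.x container IP
transport.publish_host: 10.240.118.68
transport.publish_port: 9300          # the host port mapped to the container's 9300
```

The key point is that the publish address is what other nodes dial back; with the bridge network they would otherwise try the container-internal 172.17.x address and fail, which is consistent with the connection errors in the logs above.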