Continuously updated; please follow: /blogs/zorkelvll/articles/2018/11/02/1541172452468
This article walks through installing the big-data infrastructure software Hadoop, Scala, and Spark, using macOS (and Linux) as the example environment.
1. Background
2. Practice: environment installation (macOS)
then add after it
(4) Configure the HDFS address and port in core-site.xml: vim /usr/local/Cellar/hadoop/3.0.0/libexec/etc/hadoop/core-site.xml => add the configuration,
and create the storage directories: mkdir /usr/local/Cellar/hadoop/hdfs && mkdir /usr/local/Cellar/hadoop/hdfs/tmp
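The original configuration snippet is not preserved above. As a sketch, a commonly used minimal core-site.xml for a single-node setup looks like the following; the hdfs://localhost:9000 address is an assumption (not confirmed by the original text), while the tmp directory matches the folders the tutorial creates with mkdir:

```xml
<configuration>
  <!-- Base directory for HDFS data; matches the directories created via mkdir -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/Cellar/hadoop/hdfs/tmp</value>
  </property>
  <!-- Default filesystem URI; hdfs://localhost:9000 is a common single-node choice (assumed) -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```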
Back it up first: cp /usr/local/Cellar/hadoop/3.0.0/libexec/etc/hadoop/mapred-site.xml mapred-site-bak.xml
Then edit it: vim /usr/local/Cellar/hadoop/3.0.0/libexec/etc/hadoop/mapred-site.xml => add the configuration
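Again the original snippet was lost. A typical single-node mapred-site.xml that routes MapReduce jobs onto YARN, consistent with the start-yarn.sh step below, is sketched here (an assumption, not the author's original snippet):

```xml
<configuration>
  <!-- Run MapReduce jobs on YARN instead of the default local runner -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```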
(7) Format the HDFS filesystem: hdfs namenode -format
(8) Start and stop the Hadoop services:
/usr/local/Cellar/hadoop/3.0.0/libexec/start-dfs.sh => starts the HDFS daemons: namenode, datanodes, and secondary namenode; browse to http://localhost:9870 . Note that the port is 9870, not 50070.
/usr/local/Cellar/hadoop/3.0.0/libexec/start-yarn.sh => starts the YARN daemons: resourcemanager and nodemanagers; browse to http://localhost:8088 and http://localhost:8042 .
/usr/local/Cellar/hadoop/3.0.0/libexec/stop-yarn.sh
/usr/local/Cellar/hadoop/3.0.0/libexec/stop-dfs.sh
Note: for Hadoop 3.0.0 installed via brew, the Hadoop path you configure must be the one under libexec; otherwise start-dfs.sh fails with "error: cannot execute hdfs-config".
That covers the installation of Hadoop, Scala, and Spark on a Mac; it was my first attempt on macOS yesterday and succeeded in one go. I hope it helps, and that you follow along for future updates. If you have questions or run into pitfalls, please leave a comment below!
Getting started with Spark: https://spark.apache.org/docs/latest/quick-start.html