Spark SQL Big Data Processing and Writing to Elasticsearch

2023-06-20

Spark SQL (Spark's module for working with structured data)

Data loaded through Spark SQL can come from MySQL databases, JSON files, CSV files, and other sources; once loaded into a DataFrame, you can run a whole series of computations on it (see the sketch below).
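For illustration, given an existing SparkSession named spark, the load calls for these sources might look like this; the paths, JDBC URL, table name, and credentials are placeholders, not from the original post:

# Hypothetical load examples — adjust paths and connection details
df_csv = spark.read.format("csv").option("header", "true").load("/path/to/data.csv")
df_json = spark.read.json("/path/to/data.json")
df_mysql = spark.read.format("jdbc") \
    .option("url", "jdbc:mysql://localhost:3306/testdb") \
    .option("dbtable", "some_table") \
    .option("user", "root") \
    .option("password", "secret") \
    .load()  # requires the MySQL JDBC driver jar on the classpath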

The program below shows in detail how Spark SQL loads the data and writes the result into ES:

Dataset: Beijing PM2.5 data

Spark version: 2.3.2

Python version: 3.5.2

mysql-connector-java-8.0.11 (download)

Elasticsearch: 6.4.1

Kibana: 6.4.1

elasticsearch-spark-20_2.11-6.4.1.jar (download)
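The code below imports an ES_CONF dict from a local settings module, which the original post does not show. A minimal sketch of what it might contain — the key names ELASTIC_HOST and WEATHER_INDEX_NAME come from the code, the values are placeholders:

# settings.py — hypothetical example, adjust to your environment
ES_CONF = {
    "ELASTIC_HOST": "localhost",            # used as es.nodes for the ES-Hadoop connector
    "WEATHER_INDEX_NAME": "weather_index",  # target ES index name
}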

The full code:

# coding: utf-8
import os
import sys

pre_current_dir = os.path.dirname(os.getcwd())
sys.path.append(pre_current_dir)

from pyspark.sql import SparkSession
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf
from settings import ES_CONF

current_dir = os.path.dirname(os.path.realpath(__file__))

spark = SparkSession.builder.appName("weather_result").getOrCreate()


def get_health_level(value):
    """
    Map a PM2.5 value to its health level.
    :param value: PM2.5 reading
    :return: health level label, or None for invalid readings
    """
    if 0 <= value <= 50:
        return "Very Good"
    elif 50 < value <= 100:
        return "Good"
    elif 100 < value <= 150:
        return "Unhealthy for Sensi"
    elif 150 < value <= 200:
        return "Unhealthy"
    elif 200 < value <= 300:
        return "Very Unhealthy"
    elif 300 < value <= 500:
        return "Hazardous"
    elif value > 500:
        return "Extreme danger"
    else:
        return None


def get_weather_result():
    """
    Get the data produced by the Spark SQL analysis.
    :return: result DataFrame
    """
    # Load the required columns into a DataFrame
    df_2017 = spark.read.format("csv") \
        .option("header", "true") \
        .option("inferSchema", "true") \
        .load("file://{}/data/Beijing2017_PM25.csv".format(current_dir)) \
        .select("Year", "Month", "Day", "Hour", "Value", "QC Name")

    # Inspect the schema
    df_2017.printSchema()

    # Wrap get_health_level in a UDF so it can produce a string column
    level_function_udf = udf(get_health_level, StringType())

    # Add the healthy_level column and group by it
    group_2017 = df_2017.withColumn(
        "healthy_level", level_function_udf(df_2017['Value'])
    ).groupBy("healthy_level").count()

    # Add the days and percentage columns and compute their values
    # (readings are hourly, so count / 24 gives days)
    result_2017 = group_2017.select("healthy_level", "count") \
        .withColumn("days", group_2017['count'] / 24) \
        .withColumn("percentage", group_2017['count'] / df_2017.count())
    result_2017.show()

    return result_2017


def write_result_es():
    """
    Write the Spark SQL result to ES.
    :return:
    """
    result_2017 = get_weather_result()
    # ES_CONF holds the ES node address and target index name
    result_2017.write.format("org.elasticsearch.spark.sql") \
        .option("es.nodes", "{}".format(ES_CONF['ELASTIC_HOST'])) \
        .mode("overwrite") \
        .save("{}/pm_value".format(ES_CONF['WEATHER_INDEX_NAME']))


write_result_es()
spark.stop()
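Beyond es.nodes, the ES-Hadoop connector accepts further write options. A sketch of a few commonly useful ones — the values shown are placeholders, not from the original post:

# Additional connector options — a hedged sketch, values are placeholders
(result_2017.write.format("org.elasticsearch.spark.sql")
    .option("es.nodes", "localhost")
    .option("es.port", "9200")            # REST port, 9200 by default
    .option("es.nodes.wan.only", "true")  # only contact the declared nodes (e.g. behind NAT)
    .mode("overwrite")
    .save("weather_index/pm_value"))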

Copy mysql-connector-java-8.0.11.jar and elasticsearch-spark-20_2.11-6.4.1.jar into Spark's jars directory, then submit the Spark job.
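A sketch of the submission, assuming the script is saved as weather_es.py and Spark is installed under /opt/spark-2.3.2-bin-hadoop2.7 (both names are assumptions):

cp mysql-connector-java-8.0.11.jar elasticsearch-spark-20_2.11-6.4.1.jar \
   /opt/spark-2.3.2-bin-hadoop2.7/jars/
/opt/spark-2.3.2-bin-hadoop2.7/bin/spark-submit weather_es.py

Alternatively, the jars can be passed per job with spark-submit --jars instead of copying them into the jars directory.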

Notes:

(1) If you see ClassNotFoundException: Failed to find data source: org.elasticsearch.spark.sql, Spark has not picked up the jar; rebuild and reinstall pyspark:

cd /opt/spark-2.3.2-bin-hadoop2.7/python
python3 setup.py sdist
pip install dist/*.tar.gz

(2) If you see Multiple ES-Hadoop versions detected in the classpath; please use only one, there is a redundant ES-Hadoop jar on the classpath — you may have both elasticsearch-hadoop and elasticsearch-spark installed. Delete the extra jar and rebuild pyspark as above; a quick way to check is sketched below.
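To see which ES-related jars are actually on the classpath (assuming the install path used above):

ls /opt/spark-2.3.2-bin-hadoop2.7/jars/ | grep -i elasticsearch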

Execution result: result_2017.show() prints the healthy_level / count / days / percentage table, and the documents land in the pm_value type of the target index.
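To confirm the documents arrived, you can query ES directly — a sketch assuming ES listens on localhost:9200 and the index is the hypothetical weather_index from the settings sketch above:

curl -XGET 'http://localhost:9200/weather_index/pm_value/_search?pretty&size=5'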

More source code is on my GitHub: https://github.com/a342058040/Spark-for-Python — Spark techniques implemented entirely in Python, continuously updated.
