3/14/2023

Focalpoint 1.1

To integrate with Hive, you need to add some extra dependencies to the /lib/ directory of the Flink distribution to make the integration work in a Table API program or in SQL in SQL Client. Alternatively, you can put these dependencies in a dedicated folder and add them to the classpath with the -C or -l option for Table API programs or SQL Client, respectively.

Apache Hive is built on Hadoop, so you need to provide Hadoop dependencies by setting the HADOOP_CLASSPATH environment variable: export HADOOP_CLASSPATH=`hadoop classpath`

There are two ways to add Hive dependencies. First is to use Flink's bundled Hive jars. You can choose a bundled Hive jar according to the version of the metastore you use. Second is to add each of the required jars separately.
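The steps above can be sketched as shell commands. This is a minimal sketch, not a definitive setup: the jar file name, the `/opt/hive-deps` folder, and the `$FLINK_HOME` variable are illustrative assumptions; pick the bundled Hive jar that matches your metastore version.

```shell
# Provide Hadoop dependencies (assumes a Hadoop installation with
# `hadoop` on the PATH)
export HADOOP_CLASSPATH=`hadoop classpath`

# Option 1: drop Flink's bundled Hive jar into the distribution's
# /lib/ directory (jar name below is illustrative -- choose the one
# matching your Hive metastore version)
cp flink-sql-connector-hive-3.1.2.jar "$FLINK_HOME/lib/"

# Option 2: keep the required jars in a dedicated folder instead,
# and add them to the classpath at launch time
# (folder path is illustrative):
#   - Table API program:  -C file:///opt/hive-deps/
#   - SQL Client:         -l /opt/hive-deps/
"$FLINK_HOME/bin/sql-client.sh" -l /opt/hive-deps/
```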