Introduction to JDBC Hive
The Hive JDBC driver lets users manage data in Hadoop from business intelligence applications through JDBC. The driver does this by converting JDBC calls from the application into SQL and passing the SQL query to the underlying Hive engine. Developers most often use the Hive JDBC driver to build desktop, mobile, and web applications that work with live data in Hive. Hive JDBC drivers have an architecture similar to MySQL and OLE DB drivers, and even their ResultSet, Connection, and Statement objects behave the same way. This article briefly explains Hive JDBC.
What is JDBC Hive?
The JDBC driver is one of the components of the Hive client, along with the ODBC driver and the Thrift server. The JDBC driver establishes the connection between a Java application and Hive, while the ODBC driver lets applications connect to Hive over the ODBC protocol. Hive's features include managing files in multiple data formats, accessing data through SQL, reading files from HBase and HDFS, executing queries via Tez or MapReduce, broad language support, and low-latency query retrieval through LLAP. Hive also offers a driver and a command-line tool for data operations.
How to use JDBC Hive?
The working of Hive is simple. On the client side, the application and the Hive driver connect to the Hive server in the Hive services layer. The Hive server in turn talks to a common driver through which all file systems in the Hadoop cluster can be accessed, as well as to the Hive metastore database, which lives in the Hive storage and compute layer.
Connection URL for JDBC Hive:
Hive supports connections through URL strings, just like any other database.
jdbc:hive2://<ip-address>:<port> links remote applications to HiveServer2.
The remote applications can be written in Java, Scala, Spark, or Python.
In the previous version (HiveServer1), the syntax was jdbc:hive://
The driver is installed as JAR files, which are required to manage Hive through JDBC.
The JAR files follow the naming pattern below:
hive-jdbc-<version>.jar or hive-service-<version>.jar
Once the Hive server is configured, the user should provide the JDBC driver class name, the database URL, and the client credentials. All three components must be specified to establish a connection and issue SQL queries.
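To illustrate, the three components can be wired together with plain JDBC. This is a minimal sketch: the host localhost, port 10000, database default, and the hiveuser/secret credentials are placeholder assumptions, and running it requires the hive-jdbc driver on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class HiveConnectionConfig {
    // 1. JDBC driver class name for HiveServer2.
    static final String DRIVER = "org.apache.hive.jdbc.HiveDriver";

    // Small sanity check: does the URL target HiveServer2?
    static boolean isHiveServer2Url(String url) {
        return url.startsWith("jdbc:hive2://");
    }

    public static void main(String[] args) throws Exception {
        Class.forName(DRIVER);                                // load the driver
        String url = "jdbc:hive2://localhost:10000/default";  // 2. database URL
        // 3. client credentials (placeholder values)
        try (Connection con = DriverManager.getConnection(url, "hiveuser", "secret")) {
            System.out.println("connected to " + url);
        }
    }
}
```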
To access Hive via JDBC, the server configuration must be changed in the hive-site.xml file.
The authentication mode is set through the hive.server2.authentication property, and impersonation is controlled by the hive.server2.enable.doAs property.
Whether or not the Hive services use Kerberos authentication determines how the other server properties should be configured. These properties are defined in the hive-site.xml configuration file in the Hadoop installation, and the user can change them by editing this file.
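As a sketch, these two properties would appear in hive-site.xml roughly as follows. The values shown (NONE and true) are illustrative assumptions; the actual values depend on the cluster's security setup.

```xml
<configuration>
  <!-- Authentication mode for HiveServer2: NONE, NOSASL, KERBEROS, LDAP, ... -->
  <property>
    <name>hive.server2.authentication</name>
    <value>NONE</value>
  </property>
  <!-- Run queries as the connecting end user (impersonation). -->
  <property>
    <name>hive.server2.enable.doAs</name>
    <value>true</value>
  </property>
</configuration>
```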
The user-identity (impersonation) option can be used to see which user is accessing data on the given server.
To allow remote access from Python, Scala, Java, or any other programming language, make sure the HiveServer2 service is running.
It is located in the directory $HIVE_HOME/bin:
educba@name:~/hive/bin$ ./hiveserver2
This starts HiveServer2.
To connect Java or Scala to Hive and execute HiveQL, add the hive-jdbc library from the Maven repository. The dependency is managed with Gradle or Maven; with Maven, the user declares the artifact in the pom.xml file. The artifact version and the Hive version must match to avoid errors.
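For example, the Maven dependency can be declared in pom.xml roughly as follows. The version 3.1.2 is a placeholder; it should match the Hive version running on the cluster.

```xml
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-jdbc</artifactId>
  <version>3.1.2</version>
</dependency>
```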
The current driver class is org.apache.hive.jdbc.HiveDriver.
This class works with HiveServer2.
If the user is on the previous version, the driver class is org.apache.hadoop.hive.jdbc.HiveDriver,
and the connection string should be jdbc:hive://
Connecting to Hive from Java
The basic commands to access Hive from Java are shown below. They connect the application to the default database in Hive.
To load the Hive JDBC driver, use:
Class.forName("org.apache.hive.jdbc.HiveDriver");
To make the connection:
Connection con = DriverManager.getConnection("jdbc:hive2://192.168.1.1:10000/default", "user", "password");
To get the Statement object, use:
Statement stmt = con.createStatement();
To execute a query, use:
stmt.executeQuery("name of query");
The Connection object is returned for the URL:
jdbc:hive2://192.168.1.1:10000/default
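Putting the steps above together, a complete program might look like the sketch below. The table name sample_table, the hiveuser credentials, and the host 192.168.1.1 are assumptions for illustration; running it requires a live HiveServer2 and the hive-jdbc driver on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcExample {
    // Build the HiveServer2 JDBC URL from its parts.
    static String hiveUrl(String host, int port, String db) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");   // load the driver

        String url = hiveUrl("192.168.1.1", 10000, "default");
        try (Connection con = DriverManager.getConnection(url, "hiveuser", "");
             Statement stmt = con.createStatement();
             // sample_table is a hypothetical table name
             ResultSet rs = stmt.executeQuery("SELECT * FROM sample_table")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getString(2));
            }
        }
    }
}
```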
Hive from Scala:
To access Hive from Scala, import the required packages such as java.sql.SQLException, java.sql.Connection, java.sql.DriverManager, and java.sql.Statement.
object HiveJDBCClient extends App {
  val driverName = "org.apache.hive.jdbc.HiveDriver"
  Class.forName(driverName)
  val connection = DriverManager.getConnection("jdbc:hive2://192.168.1.1:10000/default", "", "")
  val stmt = connection.createStatement()
  val tableName = "educba_hivedriver_table"
  stmt.execute("DROP TABLE IF EXISTS " + tableName)
  stmt.execute("CREATE TABLE " + tableName + " (key INT, value STRING)")
  // select * query
  val sql = "SELECT * FROM " + tableName
  var res = stmt.executeQuery(sql)
  while (res.next()) {
    System.out.println(res.getString(1) + "\t" + res.getString(2))
  }
  // standard Hive count query
  res = stmt.executeQuery("SELECT COUNT(1) FROM " + tableName)
  while (res.next()) {
    System.out.println(res.getString(1))
  }
}
JDBC Hive examples
Hive has major components such as WebHCat and HCatalog. HCatalog lets data-processing tools such as Pig and MapReduce store data in Hadoop and use its processing capabilities. WebHCat makes it possible to run MapReduce, Hive jobs, and Pig over HTTP. It can also be used to manage metadata-store operations through a REST API and to handle data-type conversions. The user can use a connector to fetch data from Hive, and can submit a customized SQL query over JDBC to retrieve results with the help of that connector.
In the NOSASL authentication mode, the required configuration is jdbc.property.authentication = NOSASL.
If a user name is needed, it can be supplied through the jdbc.user property.
Several additional configuration steps are required when Kerberos authentication is used, and they can be changed as needed. To work with a secured Hive cluster, the user should add the directory containing hive-site.xml to the client classpath.
The configuration can be changed in that XML file. JDBC Hive is used in many different scenarios and can be implemented according to the requirement.
This is a guide to JDBC Hive. Here we discussed how to use JDBC Hive, along with examples and connecting to Hive from Java. You may also have a look at the following articles to learn more –