 

jdbc hive

Updated July 6, 2023

Introduction to jdbc hive

The Hive JDBC driver lets users manage data in Hadoop from business intelligence applications through standard JDBC support. The driver does this by converting calls from the application into SQL and passing the query to the underlying Hive engine. Developers most often use the Hive JDBC driver to build desktop, mobile, and web applications that work with live Hive data. Hive JDBC drivers follow the same architecture as MySQL JDBC and OLE DB drivers, using the same Connection, Statement, and ResultSet objects. This article briefly explains Hive JDBC.


What is JDBC Hive?

The JDBC driver is one of the components of the Hive client, alongside the ODBC driver and the Thrift server. The JDBC driver establishes the connection between a Java application and Hive, while the ODBC driver lets applications connect to Hive over the ODBC protocol. Hive's main features include handling files in multiple data formats, SQL-based data access, reading files from HBase and HDFS, query execution via Tez and MapReduce, broad language support, and low-latency query retrieval through LLAP. Hive also ships a driver and a command-line tool for data operations.

How to use JDBC Hive?

The working of Hive is straightforward. On the client side, the application and the Hive driver connect to the Hive server (HiveServer2) in the Hive services layer. The Hive server in turn talks to a common driver that can access all the file systems in the Hadoop cluster as well as the Hive metastore database, which lives in the Hive storage and compute layer.

Connection URL- JDBC Hive:

Like any other database, Hive supports connections through a JDBC URL string:

jdbc:hive2://<ip-address>:<port>/<database>

This URL links remote applications to Hive.

The remote applications can be written in Java, Scala, Spark, or Python.

In the previous version (HiveServer1), the syntax was jdbc:hive://
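As a small illustration, the HiveServer2-style URL can be assembled from its parts. The host, port, and database below are placeholder values (HiveServer2 listens on port 10000 by default):

```java
public class HiveUrlExample {
    // Build a HiveServer2 JDBC URL from its parts.
    static String buildUrl(String host, int port, String database) {
        return "jdbc:hive2://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) {
        // Placeholder host and database name.
        System.out.println(buildUrl("192.168.1.10", 10000, "default"));
    }
}
```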

Managing Hive through JDBC requires installing the appropriate JAR files.

The required JARs follow this naming pattern:

hive-jdbc-<version>.jar or hive-service-<version>.jar

When the Hive server is configured, the user must supply the JDBC driver class name, the database URL, and the client credentials. Every component must be specified to establish a connection, just as with any SQL database.

To access Hive via JDBC, the server configuration is changed in the below file,
jdbc-site.xml

jdbc:hive2://<serverhive_hostname>:<serverhive_port>/<databasename>
org.apache.hive.jdbc.HiveDriver

The authentication mode is set in the hive.server2.authentication property, and impersonation is controlled by the hive.server2.enable.doAs property.

Whether or not the Hive service uses Kerberos authentication affects how the other server properties are configured. These properties are defined in the above-mentioned .xml config file in Hadoop, and the user can change them by editing that file.
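As a sketch, the two server properties (named hive.server2.authentication and hive.server2.enable.doAs in current Hive releases) would appear in the Hive configuration file like this; the values are illustrative only:

```xml
<!-- Illustrative fragment of the Hive server configuration -->
<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
```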

Use the user identity option to see who is accessing data on the given server.

HiveServer2

To allow remote access from Python, Scala, Java, or any other programming language, make sure the HiveServer2 service is running.

It is located in the directory $HIVE_HOME/bin:

educba@name:~/hive/bin$ ./hiveserver2

This starts HiveServer2.

To connect Java or Scala to Hive and execute HiveQL, add the Hive JDBC library from the mvnrepository site as a dependency with Gradle or Maven. With Maven, the user declares the artifact in the pom.xml file. The artifact version must match the Hive version to avoid errors.
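Assuming Maven, the dependency declaration would look like the following sketch. The artifact coordinates are the standard org.apache.hive ones; the version shown is a placeholder and must match your Hive installation:

```xml
<!-- Hypothetical pom.xml fragment; match the version to your Hive cluster -->
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-jdbc</artifactId>
  <version>3.1.2</version>
</dependency>
```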

The new driver class is org.apache.hive.jdbc.HiveDriver

It works with HiveServer2.

Users on the previous version (HiveServer1) can instead work with

org.apache.hadoop.hive.jdbc.HiveDriver

and the connection string should then be jdbc:hive://

Connecting to Hive from Java

The simple commands to access Hive from Java are below. They connect the application to the default database in Hive.

To load the Hive JDBC driver, use

Class.forName()

To make the connection, use

DriverManager.getConnection()

To get a Statement object, use

con.createStatement()

To execute a query, use

stmt.executeQuery("query text")

The connection URL passed to getConnection() for the default database looks like

jdbc:hive2://192.168.1.10:10000/default
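The steps above can be sketched in Java as follows. The URL, credentials, and query are illustrative, and the connection is attempted only when a live HiveServer2 URL is passed on the command line:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSketch {
    // Illustrative default URL; replace the host with your HiveServer2 address.
    static final String DEFAULT_URL = "jdbc:hive2://192.168.1.10:10000/default";

    // Runs the four steps from the text against a live server.
    static void queryHive(String url) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");          // 1. load driver
        Connection con = DriverManager.getConnection(url, "", ""); // 2. connect
        Statement stmt = con.createStatement();                    // 3. statement
        ResultSet res = stmt.executeQuery("SHOW TABLES");          // 4. execute
        while (res.next()) {
            System.out.println(res.getString(1));
        }
        con.close();
    }

    public static void main(String[] args) throws Exception {
        if (args.length > 0) {
            queryHive(args[0]);             // connect only when a URL is supplied
        } else {
            System.out.println(DEFAULT_URL); // otherwise just show the URL format
        }
    }
}
```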

Connecting to Hive from Scala:

To access Hive from Scala, import the required java.sql classes (Connection, DriverManager, ResultSet, Statement) and use the JDBC driver directly:

import java.sql.{Connection, DriverManager, ResultSet, Statement}

object HiveJdbcClient extends App {
  val driverName = "org.apache.hive.jdbc.HiveDriver"
  Class.forName(driverName)

  val con: Connection =
    DriverManager.getConnection("jdbc:hive2://192.168.1.10:10000/default", "", "")
  val stmt: Statement = con.createStatement()

  val tableName = "educba_hivedriver_table"
  stmt.execute("DROP TABLE IF EXISTS " + tableName)
  stmt.execute("CREATE TABLE " + tableName + " (key INT, value STRING)")

  // select * query
  val sql = "SELECT * FROM " + tableName
  var res: ResultSet = stmt.executeQuery(sql)
  while (res.next()) {
    println(res.getInt(1) + "\t" + res.getString(2))
  }

  // standard hive query
  res = stmt.executeQuery("SELECT COUNT(1) FROM " + tableName)
  while (res.next()) {
    println(res.getString(1))
  }
  con.close()
}
}

JDBC hive examples

Hive has major components such as WebHCat and HCatalog. HCatalog stores data in Hadoop and enables data-processing capabilities for tools such as Pig and MapReduce. WebHCat exposes MapReduce, Hive jobs, and Pig over HTTP; it can also manage metadata-store operations through a REST API and handle data-type conversion. The user can fetch data from Hive through the connector, and can submit a customized SQL query over JDBC to retrieve results with the help of that connector.

For NOSASL authentication, the required configuration is jdbc.property.authentication = NOSASL.

If a user name is needed, it can be provided with jdbc.user.

With Kerberos authentication, there are several additional configuration steps, and the settings can be changed as required. To connect to a secured Hive cluster, the user should add the directory containing hive-site.xml to the client classpath.

Conclusion

The Hive JDBC configuration can be changed in its XML file. JDBC Hive is used in different scenarios and can be implemented according to the requirement.

Recommended Articles

We hope that this EDUCBA information on “jdbc hive” was beneficial to you. You can view EDUCBA’s recommended articles for more information.

  1. Hive JDBC Driver
  2. JDBC Driver for Oracle
  3. JDBC Driver
  4. What is JDBC?
