hive-site.xml (0.14)

hive.exec.script.wrapper
Default:
hive.exec.plan
Default:
hive.plan.serialization.format Query plan serialization format between client and task nodes. Two supported values are: kryo and javaXML. Kryo is the default.
Default:kryo
hive.exec.scratchdir HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.
Default:/tmp/hive
hive.exec.local.scratchdir Local scratch space for Hive jobs
Default:${system:java.io.tmpdir}/${system:user.name}
hive.downloaded.resources.dir Temporary local directory for added resources in the remote file system.
Default:${system:java.io.tmpdir}/${hive.session.id}_resources
hive.scratch.dir.permission The permission for the user specific scratch directories that get created.
Default:700
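The three scratch-directory settings above are often overridden together. A minimal hive-site.xml sketch, with illustrative paths:

<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value> <!-- HDFS root scratch dir; a per-user subdirectory is created under it -->
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/var/tmp/hive</value> <!-- illustrative local path -->
</property>
<property>
  <name>hive.scratch.dir.permission</name>
  <value>700</value> <!-- permission applied to the per-user scratch directories -->
</property>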
hive.exec.submitviachild
Default:false
hive.exec.submit.local.task.via.child Determines whether local tasks (typically the mapjoin hashtable generation phase) run in a separate JVM (true recommended) or not. Running in the same JVM avoids the overhead of spawning a new JVM, but can lead to out-of-memory issues.
Default:true
hive.exec.script.maxerrsize Maximum number of bytes a script is allowed to emit to standard error (per map-reduce task). This prevents runaway scripts from filling log partitions to capacity.
Default:100000
hive.exec.script.allow.partial.consumption When enabled, this option allows a user script to exit successfully without consuming all the data from the standard input.
Default:false
stream.stderr.reporter.prefix Streaming jobs that log to standard error with this prefix can log counter or status information.
Default:reporter:
stream.stderr.reporter.enabled Enable consumption of status and counter messages for streaming jobs.
Default:true
hive.exec.compress.output This controls whether the final outputs of a query (to a local/HDFS file or a Hive table) are compressed. The compression codec and other options are determined from the Hadoop config variables mapred.output.compress*
Default:false
hive.exec.compress.intermediate This controls whether intermediate files produced by Hive between multiple map-reduce jobs are compressed. The compression codec and other options are determined from Hadoop config variables mapred.output.compress*
Default:false
hive.intermediate.compression.codec
Default:
hive.intermediate.compression.type
Default:
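As a sketch, the two compression flags above are commonly enabled together with an explicit intermediate codec. The codec shown is an illustrative choice; any codec installed in Hadoop will do:

<property>
  <name>hive.exec.compress.output</name>
  <value>true</value> <!-- compress final query outputs -->
</property>
<property>
  <name>hive.exec.compress.intermediate</name>
  <value>true</value> <!-- compress files passed between MR stages -->
</property>
<property>
  <name>hive.intermediate.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value> <!-- illustrative codec choice -->
</property>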
hive.exec.reducers.bytes.per.reducer Size per reducer. The default is 256MB; i.e., if the input size is 1GB, Hive will use 4 reducers.
Default:256000000
hive.exec.reducers.max Maximum number of reducers that will be used. If the value specified in the configuration parameter mapred.reduce.tasks is negative, Hive will use this as the maximum number of reducers when automatically determining the number of reducers.
Default:1009
hive.exec.pre.hooks Comma-separated list of pre-execution hooks to be invoked for each statement. A pre-execution hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
Default:
hive.exec.post.hooks Comma-separated list of post-execution hooks to be invoked for each statement. A post-execution hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
Default:
hive.exec.failure.hooks Comma-separated list of on-failure hooks to be invoked for each statement. An on-failure hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
Default:
hive.client.stats.publishers Comma-separated list of statistics publishers to be invoked on counters on each job. A client stats publisher is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.stats.ClientStatsPublisher interface.
Default:
hive.exec.parallel Whether to execute jobs in parallel
Default:false
hive.exec.parallel.thread.number How many jobs at most can be executed in parallel
Default:8
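A minimal sketch enabling parallel execution of independent stages; the thread count shown is simply the default:

<property>
  <name>hive.exec.parallel</name>
  <value>true</value> <!-- run independent stages of a query concurrently -->
</property>
<property>
  <name>hive.exec.parallel.thread.number</name>
  <value>8</value> <!-- at most 8 jobs in parallel (the default) -->
</property>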
hive.mapred.reduce.tasks.speculative.execution Whether speculative execution for reducers should be turned on.
Default:true
hive.exec.counters.pull.interval The interval at which to poll the JobTracker for the counters of the running job. The smaller it is, the more load there will be on the JobTracker; the higher it is, the less granular the data caught will be.
Default:1000
hive.exec.dynamic.partition Whether or not to allow dynamic partitions in DML/DDL.
Default:true
hive.exec.dynamic.partition.mode In strict mode, the user must specify at least one static partition in case the user accidentally overwrites all partitions. In nonstrict mode all partitions are allowed to be dynamic.
Default:strict
hive.exec.max.dynamic.partitions Maximum number of dynamic partitions allowed to be created in total.
Default:1000
hive.exec.max.dynamic.partitions.pernode Maximum number of dynamic partitions allowed to be created in each mapper/reducer node.
Default:100
hive.exec.max.created.files Maximum number of HDFS files created by all mappers/reducers in a MapReduce job.
Default:100000
hive.exec.default.partition.name The default partition name in case the dynamic partition column value is null/empty string or any other value that cannot be escaped. This value must not contain any special character used in HDFS URIs (e.g., ‘:’, ‘%’, ‘/’, etc.). The user has to be aware that the dynamic partition value should not contain this value, to avoid confusion.
Default:__HIVE_DEFAULT_PARTITION__
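A sketch combining the dynamic-partitioning settings above; the raised limit is illustrative and should be sized to the real partition counts:

<property>
  <name>hive.exec.dynamic.partition</name>
  <value>true</value> <!-- allow dynamic partitions in DML/DDL -->
</property>
<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>nonstrict</value> <!-- allow all partition columns to be dynamic -->
</property>
<property>
  <name>hive.exec.max.dynamic.partitions</name>
  <value>5000</value> <!-- illustrative ceiling on total dynamic partitions -->
</property>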
hive.lockmgr.zookeeper.default.partition.name
Default:__HIVE_DEFAULT_ZOOKEEPER_PARTITION__
hive.exec.show.job.failure.debug.info If a job fails, whether to provide a link in the CLI to the task with the most failures, along with debugging hints if applicable.
Default:true
hive.exec.job.debug.capture.stacktraces Whether or not stack traces parsed from the task logs of a sampled failed task for each failed job should be stored in the SessionState
Default:true
hive.exec.job.debug.timeout
Default:30000
hive.exec.tasklog.debug.timeout
Default:20000
hive.output.file.extension String used as a file extension for output files. If not set, defaults to the codec extension for text files (e.g. “.gz”), or no extension otherwise.
Default:
hive.exec.mode.local.auto Let Hive determine whether to run in local mode automatically
Default:false
hive.exec.mode.local.auto.inputbytes.max When hive.exec.mode.local.auto is true, the input bytes must be less than this for local mode.
Default:134217728
hive.exec.mode.local.auto.input.files.max When hive.exec.mode.local.auto is true, the number of tasks must be less than this for local mode.
Default:4
hive.exec.drop.ignorenonexistent Do not report an error if DROP TABLE/VIEW specifies a non-existent table/view
Default:true
hive.ignore.mapjoin.hint Ignore the mapjoin hint
Default:true
hive.file.max.footer Maximum number of footer lines a user can define for a table file.
Default:100
hive.resultset.use.unique.column.names Make column names unique in the result set by qualifying column names with the table alias if needed. The table alias will be added to column names for queries of type “select *” or if the query explicitly uses a table alias, e.g. “select r1.x..”.
Default:true
fs.har.impl The implementation for accessing Hadoop Archives. Note that this won’t be applicable to Hadoop versions less than 0.20
Default:org.apache.hadoop.hive.shims.HiveHarFileSystem
hive.metastore.warehouse.dir Location of the default database for the warehouse.
Default:/user/hive/warehouse
hive.metastore.uris Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.
Default:
hive.metastore.connect.retries Number of retries while opening a connection to metastore
Default:3
hive.metastore.failure.retries Number of retries upon failure of Thrift metastore calls
Default:1
hive.metastore.client.connect.retry.delay Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Number of seconds for the client to wait between consecutive connection attempts
Default:1s
hive.metastore.client.socket.timeout Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. MetaStore Client socket timeout in seconds
Default:600s
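A sketch pointing a client at a remote metastore; the host name is hypothetical, and 9083 is the conventional metastore port:

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host.example.com:9083</value> <!-- hypothetical host -->
</property>
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>600s</value> <!-- keep the default client socket timeout -->
</property>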
javax.jdo.option.ConnectionPassword Password to use against the metastore database
Default:mine
hive.metastore.ds.connection.url.hook Name of the hook to use for retrieving the JDO connection URL. If empty, the value in javax.jdo.option.ConnectionURL is used
Default:
javax.jdo.option.Multithreaded Set this to true if multiple threads access metastore through JDO concurrently.
Default:true
javax.jdo.option.ConnectionURL JDBC connect string for a JDBC metastore
Default:jdbc:derby:;databaseName=metastore_db;create=true
hive.hmshandler.retry.attempts The number of times to retry an HMSHandler call if there was a connection error.
Default:1
hive.hmshandler.retry.interval Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The time between HMSHandler retry attempts on failure.
Default:1000ms
hive.hmshandler.force.reload.conf Whether to force reloading of the HMSHandler configuration (including the connection URL) before the next metastore query that accesses the datastore. Once reloaded, this value is reset to false. Used for testing only.
Default:false
hive.metastore.server.min.threads Minimum number of worker threads in the Thrift server’s pool.
Default:200
hive.metastore.server.max.threads Maximum number of worker threads in the Thrift server’s pool.
Default:100000
hive.metastore.server.tcp.keepalive Whether to enable TCP keepalive for the metastore server. Keepalive will prevent accumulation of half-open connections.
Default:true
hive.metastore.archive.intermediate.original Intermediate dir suffixes used for archiving. Not important what they are, as long as collisions are avoided
Default:_INTERMEDIATE_ORIGINAL
hive.metastore.archive.intermediate.archived
Default:_INTERMEDIATE_ARCHIVED
hive.metastore.archive.intermediate.extracted
Default:_INTERMEDIATE_EXTRACTED
hive.metastore.kerberos.keytab.file The path to the Kerberos Keytab file containing the metastore Thrift server’s service principal.
Default:
hive.metastore.kerberos.principal The service principal for the metastore Thrift server. The special string _HOST will be replaced automatically with the correct host name.
Default:hive-metastore/_HOST@EXAMPLE.COM
hive.metastore.sasl.enabled If true, the metastore Thrift interface will be secured with SASL. Clients must authenticate with Kerberos.
Default:false
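A sketch combining the three settings above for a Kerberos-secured metastore; the keytab path and realm are illustrative:

<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value> <!-- secure the Thrift interface with SASL/Kerberos -->
</property>
<property>
  <name>hive.metastore.kerberos.keytab.file</name>
  <value>/etc/security/keytabs/hive.service.keytab</value> <!-- illustrative path -->
</property>
<property>
  <name>hive.metastore.kerberos.principal</name>
  <value>hive-metastore/_HOST@EXAMPLE.COM</value> <!-- _HOST is replaced with the actual host name -->
</property>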
hive.metastore.thrift.framed.transport.enabled If true, the metastore Thrift interface will use TFramedTransport. When false (default) a standard TTransport is used.
Default:false
hive.cluster.delegation.token.store.class The delegation token store implementation. Set to org.apache.hadoop.hive.thrift.ZooKeeperTokenStore for load-balanced cluster.
Default:org.apache.hadoop.hive.thrift.MemoryTokenStore
hive.cluster.delegation.token.store.zookeeper.connectString The ZooKeeper token store connect string. You can re-use the configuration value set in hive.zookeeper.quorum, by leaving this parameter unset.
Default:
hive.cluster.delegation.token.store.zookeeper.znode The root path for token store data. Note that this is used by both HiveServer2 and the MetaStore to store delegation tokens. One directory gets created for each of them. The final directory names have the server name appended to them (HIVESERVER2, METASTORE).
Default:/hivedelegation
hive.cluster.delegation.token.store.zookeeper.acl ACL for token store entries. Comma separated list of ACL entries. For example: sasl:hive/host1@MY.DOMAIN:cdrwa,sasl:hive/host2@MY.DOMAIN:cdrwa Defaults to all permissions for the hiveserver2/metastore process user.
Default:
hive.metastore.cache.pinobjtypes List of comma separated metastore object types that should be pinned in the cache
Default:Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order
datanucleus.connectionPoolingType Specify connection pool library for datanucleus
Default:BONECP
datanucleus.validateTables Validates existing schema against code. Turn this on if you want to verify the existing schema.
Default:false
datanucleus.validateColumns Validates existing schema against code. Turn this on if you want to verify the existing schema.
Default:false
datanucleus.validateConstraints Validates existing schema against code. Turn this on if you want to verify the existing schema.
Default:false
datanucleus.storeManagerType Metadata store type.
Default:rdbms
datanucleus.autoCreateSchema Creates the necessary schema on startup if one doesn’t exist. Set this to false after creating it once.
Default:true
datanucleus.fixedDatastore
Default:false
hive.metastore.schema.verification Enforce metastore schema version consistency. True: Verify that the version information stored in the metastore matches the version from the Hive jars, and disable automatic schema migration. Users are then required to migrate the schema manually after a Hive upgrade, which ensures proper metastore schema migration. False (default): Warn if the version information stored in the metastore doesn’t match the version from the Hive jars.
Default:false
datanucleus.autoStartMechanismMode Throw an exception if metadata tables are incorrect.
Default:checked
datanucleus.transactionIsolation Default transaction isolation level for identity generation.
Default:read-committed
datanucleus.cache.level2 Use a level 2 cache. Turn this off if metadata is changed independently of Hive metastore server
Default:false
datanucleus.cache.level2.type
Default:none
datanucleus.identifierFactory Name of the identifier factory to use when generating table/column names etc. ‘datanucleus1’ is used for backward compatibility with DataNucleus v1.
Default:datanucleus1
datanucleus.rdbms.useLegacyNativeValueStrategy
Default:true
datanucleus.plugin.pluginRegistryBundleCheck Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE]
Default:LOG
hive.metastore.batch.retrieve.max Maximum number of objects (tables/partitions) that can be retrieved from the metastore in one batch. The higher the number, the fewer round trips are needed to the Hive metastore server, but it may also cause a higher memory requirement on the client side.
Default:300
hive.metastore.batch.retrieve.table.partition.max Maximum number of table partitions that metastore internally retrieves in one batch.
Default:1000
hive.metastore.init.hooks A comma separated list of hooks to be invoked at the beginning of HMSHandler initialization. An init hook is specified as the name of a Java class which extends org.apache.hadoop.hive.metastore.MetaStoreInitListener.
Default:
hive.metastore.pre.event.listeners List of comma separated listeners for metastore events.
Default:
hive.metastore.event.listeners
Default:
hive.metastore.authorization.storage.checks Should the metastore do authorization checks against the underlying storage (usually HDFS) for operations like drop-partition (disallow the drop-partition if the user in question doesn’t have permissions to delete the corresponding directory on the storage).
Default:false
hive.metastore.event.clean.freq Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Frequency at which timer task runs to purge expired events in metastore.
Default:0s
hive.metastore.event.expiry.duration Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Duration after which events expire from events table
Default:0s
hive.metastore.execute.setugi In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client’s reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that it’s best effort: if the client sets it to true and the server sets it to false, the client setting will be ignored.
Default:true
hive.metastore.partition.name.whitelist.pattern Partition names will be checked against this regex pattern and rejected if not matched.
Default:
hive.metastore.integral.jdo.pushdown Allow JDO query pushdown for integral partition columns in metastore. Off by default. This improves metastore perf for integral columns, especially if there’s a large number of partitions. However, it doesn’t work correctly with integral values that are not normalized (e.g. have leading zeroes, like 0012). If metastore direct SQL is enabled and works, this optimization is also irrelevant.
Default:false
hive.metastore.try.direct.sql Whether the Hive metastore should try to use direct SQL queries instead of the DataNucleus for certain read paths. This can improve metastore performance when fetching many partitions or column statistics by orders of magnitude; however, it is not guaranteed to work on all RDBMS-es and all versions. In case of SQL failures, the metastore will fall back to the DataNucleus, so it’s safe even if SQL doesn’t work for all queries on your datastore. If all SQL queries fail (for example, your metastore is backed by MongoDB), you might want to disable this to save the try-and-fall-back cost.
Default:true
hive.metastore.try.direct.sql.ddl Same as hive.metastore.try.direct.sql, for read statements within a transaction that modifies metastore data. Due to non-standard behavior in Postgres, if a direct SQL select query has incorrect syntax or something similar inside a transaction, the entire transaction will fail and fall-back to DataNucleus will not be possible. You should disable the usage of direct SQL inside transactions if that happens in your case.
Default:true
hive.metastore.disallow.incompatible.col.type.changes If true (default is false), ALTER TABLE operations which change the type of a column (say STRING) to an incompatible type (say MAP) are disallowed. RCFile default SerDe (ColumnarSerDe) serializes the values in such a way that the datatypes can be converted from string to any type. The map is also serialized as a string, which can be read as a string as well. However, with any binary serialization, this is not true. Blocking the ALTER TABLE prevents ClassCastExceptions when subsequently trying to access old partitions. Primitive types like INT, STRING, BIGINT, etc., are compatible with each other and are not blocked. See HIVE-4409 for more details.
Default:false
hive.table.parameters.default Default property values for newly created tables
Default:
hive.ddl.createtablelike.properties.whitelist Table Properties to copy over when executing a Create Table Like.
Default:
hive.metastore.rawstore.impl Name of the class that implements the org.apache.hadoop.hive.metastore.rawstore interface. This class is used for storage and retrieval of raw metadata objects such as tables and databases.
Default:org.apache.hadoop.hive.metastore.ObjectStore
javax.jdo.option.ConnectionDriverName Driver class name for a JDBC metastore
Default:org.apache.derby.jdbc.EmbeddedDriver
javax.jdo.PersistenceManagerFactoryClass Class implementing the JDO persistence.
Default:org.datanucleus.api.jdo.JDOPersistenceManagerFactory
hive.metastore.expression.proxy
Default:org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore
javax.jdo.option.DetachAllOnCommit Detaches all objects from session so that they can be used after transaction is committed
Default:true
javax.jdo.option.NonTransactionalRead Reads outside of transactions
Default:true
javax.jdo.option.ConnectionUserName Username to use against metastore database
Default:APP
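The four javax.jdo.option connection settings (URL, driver, user name, password) are usually changed together when moving off the embedded Derby database. A sketch for a MySQL-backed metastore, with hypothetical host, database, and credentials:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://db-host.example.com:3306/metastore</value> <!-- hypothetical host/database -->
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value> <!-- MySQL JDBC driver -->
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value> <!-- hypothetical user -->
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepassword</value> <!-- hypothetical; replace the shipped default -->
</property>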
hive.metastore.end.function.listeners List of comma separated listeners for the end of metastore functions.
Default:
hive.metastore.partition.inherit.table.properties List of comma separated keys occurring in table properties which will get inherited to newly created partitions. * implies all the keys will get inherited.
Default:
hive.metadata.export.location When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, it is the location to which the metadata will be exported. The default is an empty string, which results in the metadata being exported to the current user’s home directory on HDFS.
Default:
hive.metadata.move.exported.metadata.to.trash When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, this setting determines if the metadata that is exported will subsequently be moved to the user’s trash directory alongside the dropped table data. This ensures that the metadata will be cleaned up along with the dropped table data.
Default:true
hive.cli.errors.ignore
Default:false
hive.cli.print.current.db Whether to include the current database in the Hive prompt.
Default:false
hive.cli.prompt Command line prompt configuration value. Other hiveconf variables can be used in this configuration value. Variable substitution will only be invoked at Hive CLI startup.
Default:hive
hive.cli.pretty.output.num.cols The number of columns to use when formatting output generated by the DESCRIBE PRETTY table_name command. If the value of this property is -1, then Hive will use the auto-detected terminal width.
Default:-1
hive.metastore.fs.handler.class
Default:org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl
hive.session.id
Default:
hive.session.silent
Default:false
hive.session.history.enabled Whether to log Hive query, query plan, runtime statistics etc.
Default:false
hive.query.string Query being executed (there might be multiple per session)
Default:
hive.query.id ID for the query being executed (there might be multiple per session)
Default:
hive.jobname.length Maximum job name length.
Default:50
hive.jar.path The location of hive_cli.jar that is used when submitting jobs in a separate JVM.
Default:
hive.aux.jars.path The location of the plugin jars that contain implementations of user defined functions and serdes.
Default:
hive.reloadable.aux.jars.path These jars can be renewed by executing the reload command, and they can be used for auxiliary classes such as UDFs or SerDes.
Default:
hive.added.files.path This is an internal parameter.
Default:
hive.added.jars.path This is an internal parameter.
Default:
hive.added.archives.path This is an internal parameter.
Default:
hive.auto.progress.timeout Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. How long to run autoprogressor for the script/UDTF operators. Set to 0 for forever.
Default:0s
hive.script.auto.progress Whether the Hive Transform/Map/Reduce clause should automatically send progress information to the TaskTracker to avoid the task getting killed because of inactivity. Hive sends progress information when the script is outputting to stderr. This option removes the need to periodically produce stderr messages, but users should be cautious because it may prevent the TaskTracker from killing scripts stuck in infinite loops.
Default:false
hive.script.operator.id.env.var Name of the environment variable that holds the unique script operator ID in the user’s transform function (the custom mapper/reducer that the user has specified in the query)
Default:HIVE_SCRIPT_OPERATOR_ID
hive.script.operator.truncate.env Truncate each environment variable for the external script in the script operator to 20KB (to fit system limits).
Default:false
hive.script.operator.env.blacklist Comma separated list of keys from the configuration file not to convert to environment variables when invoking the script operator.
Default:hive.txn.valid.txns,hive.script.operator.env.blacklist
hive.mapred.mode The mode in which the Hive operations are being performed. In strict mode, some risky queries are not allowed to run. They include: Cartesian products, queries where no partition is picked up, comparing bigints and strings, comparing bigints and doubles, and ORDER BY without LIMIT.
Default:nonstrict
hive.alias
Default:
hive.map.aggr Whether to use map-side aggregation in Hive Group By queries
Default:true
hive.groupby.skewindata Whether there is skew in data to optimize group by queries
Default:false
hive.optimize.multigroupby.common.distincts Whether to optimize a multi-groupby query with the same distinct. Consider a query like: from src insert overwrite table dest1 select col1, count(distinct colx) group by col1 insert overwrite table dest2 select col2, count(distinct colx) group by col2; With this parameter set to true, first we spray by the distinct value (colx), and then perform the 2 group bys. This makes sense if map-side aggregation is turned off. However, with map-side aggregation, it might be useful in some cases to treat the 2 inserts independently, thereby performing the query above in 2 MR jobs instead of 3 (due to spraying by the distinct key first). If this parameter is turned off, we don’t consider the fact that the distinct key is the same across different MR jobs.
Default:true
hive.join.emit.interval How many rows in the right-most join operand Hive should buffer before emitting the join result.
Default:1000
hive.join.cache.size How many rows in the joining tables (except the streaming table) should be cached in memory.
Default:25000
hive.cbo.enable Flag to control enabling Cost Based Optimizations using Calcite framework.
Default:false
hive.mapjoin.bucket.cache.size
Default:100
hive.mapjoin.optimized.hashtable Whether Hive should use memory-optimized hash table for MapJoin. Only works on Tez, because memory-optimized hashtable cannot be serialized.
Default:true
hive.mapjoin.optimized.keys Whether the MapJoin hashtable should use optimized (size-wise) keys, allowing the table to take less memory. Depending on the key, the memory savings for the entire table can be 5-15% or so.
Default:true
hive.mapjoin.lazy.hashtable Whether MapJoin hashtable should deserialize values on demand. Depending on how many values in the table the join will actually touch, it can save a lot of memory by not creating objects for rows that are not needed. If all rows are needed obviously there’s no gain.
Default:true
hive.mapjoin.optimized.hashtable.wbsize Optimized hashtable (see hive.mapjoin.optimized.hashtable) uses a chain of buffers to store data. This is one buffer size. HT may be slightly faster if this is larger, but for small joins unnecessary memory will be allocated and then trimmed.
Default:10485760
hive.smbjoin.cache.rows How many rows with the same key value should be cached in memory per SMB-joined table.
Default:10000
hive.groupby.mapaggr.checkinterval Number of rows after which the size check of the grouping keys/aggregation classes is performed.
Default:100000
hive.map.aggr.hash.percentmemory Portion of total memory to be used by map-side group aggregation hash table
Default:0.5
hive.mapjoin.followby.map.aggr.hash.percentmemory Portion of total memory to be used by map-side group aggregation hash table, when this group by is followed by map join
Default:0.3
hive.map.aggr.hash.force.flush.memory.threshold The max memory to be used by the map-side group aggregation hash table. If the memory usage is higher than this number, the data is force-flushed.
Default:0.9
hive.map.aggr.hash.min.reduction Hash aggregation will be turned off if the ratio between hash table size and input rows is bigger than this number. Set to 1 to make sure hash aggregation is never turned off.
Default:0.5
hive.multigroupby.singlereducer Whether to optimize multi group by query to generate single M/R job plan. If the multi group by query has common group by keys, it will be optimized to generate single M/R job.
Default:true
hive.map.groupby.sorted If the bucketing/sorting properties of the table exactly match the grouping key, whether to perform the group by in the mapper by using BucketizedHiveInputFormat. The only downside to this is that it limits the number of mappers to the number of files.
Default:false
hive.map.groupby.sorted.testmode If the bucketing/sorting properties of the table exactly match the grouping key, whether to perform the group by in the mapper by using BucketizedHiveInputFormat. If the test mode is set, the plan is not converted, but a query property is set to denote the same.
Default:false
hive.groupby.orderby.position.alias Whether to enable using Column Position Alias in Group By or Order By
Default:false
hive.new.job.grouping.set.cardinality Whether a new map-reduce job should be launched for grouping sets/rollups/cubes. For a query like: select a, b, c, count(1) from T group by a, b, c with rollup; 4 rows are created per row: (a, b, c), (a, b, null), (a, null, null), (null, null, null). This can lead to explosion across map-reduce boundary if the cardinality of T is very high, and map-side aggregation does not do a very good job. This parameter decides if Hive should add an additional map-reduce job. If the grouping set cardinality (4 in the example above), is more than this value, a new MR job is added under the assumption that the original group by will reduce the data size.
Default:30
hive.udtf.auto.progress Whether Hive should automatically send progress information to the TaskTracker when using UDTFs, to prevent the task getting killed because of inactivity. Users should be cautious because this may prevent the TaskTracker from killing tasks with infinite loops.
Default:false
hive.default.fileformat Expects one of [textfile, sequencefile, rcfile, orc]. Default file format for CREATE TABLE statement. Users can explicitly override it by CREATE TABLE … STORED AS [FORMAT]
Default:TextFile
hive.query.result.fileformat Expects one of [textfile, sequencefile, rcfile]. Default file format for storing result of the query.
Default:TextFile
hive.fileformat.check Whether to check file format or not when loading data files
Default:true
hive.default.rcfile.serde The default SerDe Hive will use for the RCFile format
Default:org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe
hive.default.serde The default SerDe Hive will use for storage formats that do not specify a SerDe.
Default:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
hive.serdes.using.metastore.for.schema SerDes retrieving their schema from the metastore. This is an internal parameter. Check with the Hive dev team.
Default:org.apache.hadoop.hive.ql.io.orc.OrcSerde, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe, org.apache.hadoop.hive.serde2.dynamic_type.DynamicSerDe, org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe, org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe, org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe, org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
hive.querylog.location Location of the Hive runtime structured log file.
Default:${system:java.io.tmpdir}/${system:user.name}
hive.querylog.enable.plan.progress Whether to log the plan’s progress every time a job’s progress is checked. These logs are written to the location specified by hive.querylog.location
Default:true
hive.querylog.plan.progress.interval Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The interval to wait between logging the plan’s progress. If there is a whole number percentage change in the progress of the mappers or the reducers, the progress is logged regardless of this value. The actual interval will be the ceiling of (this value divided by the value of hive.exec.counters.pull.interval) multiplied by the value of hive.exec.counters.pull.interval; i.e., if it does not divide evenly by the value of hive.exec.counters.pull.interval, it will be logged less frequently than specified. This only has an effect if hive.querylog.enable.plan.progress is set to true.
Default:60000ms
hive.script.serde The default SerDe for transmitting input data to and reading output data from the user scripts.
Default:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
hive.script.recordreader The default record reader for reading data from the user scripts.
Default:org.apache.hadoop.hive.ql.exec.TextRecordReader
hive.script.recordwriter The default record writer for writing data to the user scripts.
Default:org.apache.hadoop.hive.ql.exec.TextRecordWriter
hive.transform.escape.input This adds an option to escape special chars (newlines, carriage returns and tabs) when they are passed to the user script. This is useful if the Hive tables can contain data that contains special characters.
Default:false
hive.binary.record.max.length Read from a binary stream and treat each hive.binary.record.max.length bytes as a record. The last record before the end of stream can have fewer than hive.binary.record.max.length bytes.
Default:1000
hive.hwi.listen.host This is the host address the Hive Web Interface will listen on
Default:0.0.0.0
hive.hwi.listen.port This is the port the Hive Web Interface will listen on
Default:9999
hive.hwi.war.file This sets the path to the HWI war file, relative to ${HIVE_HOME}.
Default:${env:HWI_WAR_FILE}
hive.mapred.local.mem Mapper/reducer memory in local mode.
Default:0
hive.mapjoin.smalltable.filesize The threshold for the input file size of the small tables; if the file size is smaller than this threshold, it will try to convert the common join into map join
Default:25000000
hive.sample.seednumber A number used for percentage sampling. By changing this number, the user changes the subsets of data sampled.
Default:0
hive.test.mode Whether Hive is running in test mode. If yes, it turns on sampling and prefixes the output tablename.
Default:false
hive.test.mode.prefix In test mode, specifies the prefix for output table names.
Default:test_
hive.test.mode.samplefreq In test mode, specifies the sampling frequency for a table which is not bucketed. For example, the following query: INSERT OVERWRITE TABLE dest SELECT col1 FROM src would be converted to INSERT OVERWRITE TABLE test_dest SELECT col1 FROM src TABLESAMPLE (BUCKET 1 OUT OF 32 ON rand(1)).
Default:32
hive.test.mode.nosamplelist In test mode, specifies comma separated table names to which sampling is not applied.
Default:
hive.test.dummystats.aggregator internal variable for test
Default:
hive.test.dummystats.publisher internal variable for test
Default:
hive.merge.mapfiles Merge small files at the end of a map-only job
Default:true
hive.merge.mapredfiles Merge small files at the end of a map-reduce job
Default:false
hive.merge.tezfiles Merge small files at the end of a Tez DAG
Default:false
hive.merge.size.per.task Size of merged files at the end of the job
Default:256000000
hive.merge.smallfiles.avgsize When the average output file size of a job is less than this number, Hive will start an additional map-reduce job to merge the output files into bigger files. This is only done for map-only jobs if hive.merge.mapfiles is true, and for map-reduce jobs if hive.merge.mapredfiles is true.
Default:16000000
hive.merge.rcfile.block.level
Default:true
hive.merge.orcfile.stripe.level When hive.merge.mapfiles, hive.merge.mapredfiles or hive.merge.tezfiles is enabled while writing a table with ORC file format, enabling this config will do stripe-level fast merge for small ORC files. Note that enabling this config will not honor the padding tolerance config (hive.exec.orc.block.padding.tolerance).
Default:true
hive.exec.rcfile.use.explicit.header If this is set the header for RCFiles will simply be RCF. If this is not set the header will be that borrowed from sequence files, e.g. SEQ- followed by the input and output RCFile formats.
Default:true
hive.exec.rcfile.use.sync.cache
Default:true
hive.io.rcfile.record.interval
Default:2147483647
hive.io.rcfile.column.number.conf
Default:0
hive.io.rcfile.tolerate.corruptions
Default:false
hive.io.rcfile.record.buffer.size
Default:4194304
hive.exec.orc.memory.pool Maximum fraction of heap that can be used by ORC file writers
Default:0.5
hive.exec.orc.write.format Define the version of the file to write. Possible values are 0.11 and 0.12. If this parameter is not defined, ORC will use the run length encoding (RLE) introduced in Hive 0.12. Any value other than 0.11 results in the 0.12 encoding.
Default:
hive.exec.orc.default.stripe.size Define the default ORC stripe size, in bytes.
Default:67108864
hive.exec.orc.default.block.size Define the default file system block size for ORC files.
Default:268435456
hive.exec.orc.dictionary.key.size.threshold If the number of keys in a dictionary is greater than this fraction of the total number of non-null rows, turn off dictionary encoding. Use 1 to always use dictionary encoding.
Default:0.8
hive.exec.orc.default.row.index.stride Define the default ORC index stride in number of rows. (Stride is the number of rows an index entry represents.)
Default:10000
hive.orc.row.index.stride.dictionary.check If enabled, the dictionary check will happen after the first row index stride (default 10000 rows); otherwise the dictionary check will happen before writing the first stripe. In both cases, the decision whether to use a dictionary or not is retained thereafter.
Default:true
hive.exec.orc.default.buffer.size Define the default ORC buffer size, in bytes.
Default:262144
hive.exec.orc.default.block.padding Define the default block padding, which pads stripes to the HDFS block boundaries.
Default:true
hive.exec.orc.block.padding.tolerance Define the tolerance for block padding as a decimal fraction of stripe size (for example, the default value 0.05 is 5% of the stripe size). For the defaults of a 64MB ORC stripe and 256MB HDFS blocks, the default block padding tolerance of 5% will reserve a maximum of 3.2MB for padding within the 256MB block. In that case, if the available size within the block is more than 3.2MB, a new smaller stripe will be inserted to fit within that space. This will make sure that no stripe written will cross block boundaries and cause remote reads within a node-local task.
Default:0.05
hive.exec.orc.default.compress Define the default compression codec for ORC file
Default:ZLIB
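A sketch overriding two of the ORC writer defaults above; both values are illustrative, with SNAPPY shown as an alternative to the ZLIB default:

<property>
  <name>hive.exec.orc.default.stripe.size</name>
  <value>134217728</value> <!-- illustrative 128MB stripes instead of the 64MB default -->
</property>
<property>
  <name>hive.exec.orc.default.compress</name>
  <value>SNAPPY</value> <!-- illustrative alternative to ZLIB -->
</property>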
hive.exec.orc.encoding.strategy Expects one of [speed, compression]. Define the encoding strategy to use while writing data. Changing this will only affect the light weight encoding for integers. This flag will not change the compression level of higher level compression codec (like ZLIB).
Default:SPEED
hive.exec.orc.compression.strategy Expects one of [speed, compression]. Define the compression strategy to use while writing data. This changes the compression level of higher level compression codec (like ZLIB).
Default:SPEED
hive.orc.splits.include.file.footer If turned on, splits generated by ORC will include metadata about the stripes in the file. This data is read remotely (from the client or HS2 machine) and sent to all the tasks.
Default:false
hive.orc.cache.stripe.details.size Cache size for keeping meta info about ORC splits cached in the client.
Default:10000
hive.orc.compute.splits.num.threads How many threads ORC should use to create splits in parallel.
Default:10
hive.exec.orc.skip.corrupt.data If the ORC reader encounters corrupt data, this value will be used to determine whether to skip the corrupt data or throw an exception. The default behavior is to throw an exception.
Default:false
hive.exec.orc.zerocopy Use zerocopy reads with ORC. (This requires Hadoop 2.3 or later.)
Default:false
hive.lazysimple.extended_boolean_literal LazySimpleSerde uses this property to determine whether it treats ‘T’, ‘t’, ‘F’, ‘f’, ‘1’, and ‘0’ as extended, legal boolean literals, in addition to ‘TRUE’ and ‘FALSE’. The default is false, which means only ‘TRUE’ and ‘FALSE’ are treated as legal boolean literals.
Default:false
hive.optimize.skewjoin Whether to enable skew join optimization. The algorithm is as follows: At runtime, detect the keys with a large skew. Instead of processing those keys, store them temporarily in an HDFS directory. In a follow-up map-reduce job, process those skewed keys. The same key need not be skewed for all the tables, and so, the follow-up map-reduce job (for the skewed keys) would be much faster, since it would be a map-join.
Default:false
hive.auto.convert.join Whether Hive enables the optimization of converting a common join into a mapjoin based on the input file size.
Default:true
hive.auto.convert.join.noconditionaltask Whether Hive enables the optimization of converting a common join into a mapjoin based on the input file size. If this parameter is on, and the sum of sizes for n-1 of the tables/partitions of an n-way join is smaller than the specified size, the join is directly converted to a mapjoin (there is no conditional task).
Default:true
hive.auto.convert.join.noconditionaltask.size If hive.auto.convert.join.noconditionaltask is off, this parameter does not take effect. However, if it is on, and the sum of sizes for n-1 of the tables/partitions of an n-way join is smaller than this size, the join is directly converted to a mapjoin (there is no conditional task). The default is 10MB.
Default:10000000
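A sketch of the auto-mapjoin trio above as it is typically tuned together; the raised size threshold is illustrative and should reflect available task memory:

<property>
  <name>hive.auto.convert.join</name>
  <value>true</value> <!-- convert common joins to mapjoins when the small side is small enough -->
</property>
<property>
  <name>hive.auto.convert.join.noconditionaltask</name>
  <value>true</value> <!-- convert directly, without a conditional task -->
</property>
<property>
  <name>hive.auto.convert.join.noconditionaltask.size</name>
  <value>30000000</value> <!-- illustrative ~30MB threshold instead of the 10MB default -->
</property>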
hive.auto.convert.join.use.nonstaged For conditional joins, if the input stream from a small alias can be directly applied to the join operator without filtering or projection, the alias need not be pre-staged in the distributed cache via a mapred local task. Currently, this does not work with vectorization or the Tez execution engine.
Default:false
hive.skewjoin.key Determines whether we have a skew key in a join. If we see more than the specified number of rows with the same key in the join operator, we consider the key a skew join key.
Default:100000
hive.skewjoin.mapjoin.map.tasks Determines the number of map tasks used in the follow-up map join job for a skew join. It should be used together with hive.skewjoin.mapjoin.min.split to perform fine-grained control.
Default:10000
hive.skewjoin.mapjoin.min.split Determines the maximum number of map tasks used in the follow-up map join job for a skew join, by specifying the minimum split size. It should be used together with hive.skewjoin.mapjoin.map.tasks to perform fine-grained control.
Default:33554432
hive.heartbeat.interval Send a heartbeat after this interval – used by mapjoin and filter operators
Default:1000
hive.limit.row.max.size When trying a smaller subset of data for simple LIMIT, the minimum size we need to guarantee each row to have.
Default:100000
hive.limit.optimize.limit.file When trying a smaller subset of data for simple LIMIT, maximum number of files we can sample.
Default:10
hive.limit.optimize.enable Whether to enable the optimization of trying a smaller subset of data for simple LIMIT first.
Default:false
hive.limit.optimize.fetch.max Maximum number of rows allowed for a smaller subset of data for simple LIMIT, if it is a fetch query. Insert queries are not restricted by this limit.
Default:50000
hive.limit.pushdown.memory.usage The max memory to be used for hash in RS operator for top K selection.
Default:-1.0
hive.limit.query.max.table.partition This controls how many partitions can be scanned for each partitioned table. The default value “-1” means no limit.
Default:-1
hive.hashtable.key.count.adjustment Adjustment to mapjoin hashtable size derived from table and column statistics; the estimate of the number of keys is divided by this value. If the value is 0, statistics are not used and hive.hashtable.initialCapacity is used instead.
Default:1.0
hive.hashtable.initialCapacity Initial capacity of the mapjoin hashtable if statistics are absent, or if hive.hashtable.key.count.adjustment is set to 0.
Default:100000
hive.hashtable.loadfactor
Default:0.75
hive.mapjoin.followby.gby.localtask.max.memory.usage This number means how much memory the local task can take to hold the key/value into an in-memory hash table when this map join is followed by a group by. If the local task’s memory usage is more than this number, the local task will abort by itself. It means the data of the small table is too large to be held in memory.
Default:0.55
hive.mapjoin.localtask.max.memory.usage This number means how much memory the local task can take to hold the key/value into an in-memory hash table. If the local task’s memory usage is more than this number, the local task will abort by itself. It means the data of the small table is too large to be held in memory.
Default:0.9
hive.mapjoin.check.memory.rows The number of rows processed after which the memory usage is checked.
Default:100000
hive.debug.localtask
Default:false
hive.input.format The default input format. Set this to HiveInputFormat if you encounter problems with CombineHiveInputFormat.
Default:org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
hive.tez.input.format The default input format for tez. Tez groups splits in the AM.
Default:org.apache.hadoop.hive.ql.io.HiveInputFormat
hive.tez.container.size By default Tez will spawn containers of the size of a mapper. This can be used to override that size.
Default:-1
hive.tez.cpu.vcores By default Tez will ask for however many CPUs map-reduce is configured to use per container. This can be used to override that value.
Default:-1
hive.tez.java.opts By default Tez will use the Java options from map tasks. This can be used to override them.
Default:
hive.tez.log.level The log level to use for tasks executing as part of the DAG. Used only if hive.tez.java.opts is used to configure Java options.
Default:INFO
hive.enforce.bucketing Whether bucketing is enforced. If true, while inserting into the table, bucketing is enforced.
Default:false
hive.enforce.sorting Whether sorting is enforced. If true, while inserting into the table, sorting is enforced.
Default:false
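A minimal sketch enabling both enforcement flags above, so inserts honor the table’s declared bucketing and sort order:

<property>
  <name>hive.enforce.bucketing</name>
  <value>true</value> <!-- enforce bucketing while inserting -->
</property>
<property>
  <name>hive.enforce.sorting</name>
  <value>true</value> <!-- enforce sorting while inserting -->
</property>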
hive.optimize.bucketingsorting If hive.enforce.bucketing or hive.enforce.sorting is true, don’t create a reducer for enforcing bucketing/sorting for queries of the form: insert overwrite table T2 select * from T1; where T1 and T2 are bucketed/sorted by the same keys into the same number of buckets.
Default:true
hive.mapred.partitioner
Default:org.apache.hadoop.hive.ql.io.DefaultHivePartitioner
hive.enforce.sortmergebucketmapjoin If the user asked for sort-merge bucketed map-side join, and it cannot be performed, should the query fail or not?
Default:false
hive.enforce.bucketmapjoin If the user asked for bucketed map-side join, and it cannot be performed, should the query fail or not? For example, if the buckets in the tables being joined are not a multiple of each other, bucketed map-side join cannot be performed, and the query will fail if hive.enforce.bucketmapjoin is set to true.
Default:false
hive.auto.convert.sortmerge.join Whether the join will be automatically converted to a sort-merge join if the joined tables pass the criteria for sort-merge join.
Default:false
hive.auto.convert.sortmerge.join.bigtable.selection.policy The policy to choose the big table for automatic conversion to sort-merge join. By default, the table with the largest partitions is assigned as the big table. All policies are: based on position of the table (the leftmost table is selected): org.apache.hadoop.hive.ql.optimizer.LeftmostBigTableSMJ; based on total size (all the partitions selected in the query) of the table: org.apache.hadoop.hive.ql.optimizer.TableSizeBasedBigTableSelectorForAutoSMJ; based on average size (all the partitions selected in the query) of the table: org.apache.hadoop.hive.ql.optimizer.AvgPartitionSizeBasedBigTableSelectorForAutoSMJ. New policies can be added in the future.
Default:org.apache.hadoop.hive.ql.optimizer.AvgPartitionSizeBasedBigTableSelectorForAutoSMJ
hive.auto.convert.sortmerge.join.to.mapjoin If hive.auto.convert.sortmerge.join is set to true, and a join was converted to a sort-merge join, this parameter decides whether each table should be tried as the big table, and effectively whether a map-join should be tried. That would create a conditional task with n+1 children for an n-way join (1 child for each table as the big table), and the backup task will be the sort-merge join. In some cases, a map-join would be faster than a sort-merge join, if there is no advantage to having the output bucketed and sorted. For example, if a very big sorted and bucketed table with few files (say 10 files) is being joined with a very small sorted and bucketed table with few files (10 files), the sort-merge join will only use 10 mappers, and a simple map-only join might be faster if the complete small table can fit in memory, and a map-join can be performed.
Default:false
hive.exec.script.trust
Default:false
hive.exec.rowoffset Whether to provide the row offset virtual column
Default:false
hive.hadoop.supports.splittable.combineinputformat
Default:false
hive.optimize.index.filter Whether to enable automatic use of indexes
Default:false
hive.optimize.index.autoupdate Whether to update stale indexes automatically
Default:false
hive.optimize.ppd Whether to enable predicate pushdown
Default:true
hive.ppd.recognizetransivity Whether to transitively replicate predicate filters over equijoin conditions.
Default:true
hive.ppd.remove.duplicatefilters During query optimization, filters may be pushed down in the operator tree. If this config is true, only pushed-down filters remain in the operator tree, and the original filter is removed. If this config is false, the original filter is also left in the operator tree at the original place.
Default:true
hive.optimize.constant.propagation Whether to enable constant propagation optimizer
Default:true
hive.optimize.metadataonly
Default:true
hive.optimize.null.scan Don’t scan relations which are guaranteed to not generate any rows.
Default:true
hive.optimize.ppd.storage Whether to push predicates down to storage handlers
Default:true
hive.optimize.groupby Whether to enable the bucketed group by from bucketed partitions/tables.
Default:true
hive.optimize.bucketmapjoin Whether to try bucket mapjoin
Default:false
hive.optimize.bucketmapjoin.sortedmerge Whether to try sorted bucket merge map join
Default:false
hive.optimize.reducededuplication Remove extra map-reduce jobs if the data is already clustered by the same key which needs to be used again. This should always be set to true. Since it is a new feature, it has been made configurable.
Default:true
hive.optimize.reducededuplication.min.reducer Reduce deduplication merges two RSs by moving the key/parts/reducer-num of the child RS to the parent RS. That means if the reducer-num of the child RS is fixed (order by or forced bucketing) and small, it can result in a single, very slow MR job. The optimization will be automatically disabled if the number of reducers would be less than the specified value.
Default:4
hive.optimize.sort.dynamic.partition When enabled, the dynamic partitioning column will be globally sorted. This way we can keep only one record writer open for each partition value in the reducer, thereby reducing the memory pressure on reducers.
Default:false
hive.optimize.sampling.orderby Uses sampling on order-by clause for parallel execution.
Default:false
hive.optimize.sampling.orderby.number Total number of samples to be obtained.
Default:1000
hive.optimize.sampling.orderby.percent Expects value between 0.0f and 1.0f. Probability with which a row will be chosen.
Default:0.1
hive.optimize.union.remove Whether to remove the union and push the operators between the union and the filesink above the union. This avoids an extra scan of the output by the union. This is independently useful for union queries, and especially useful when hive.optimize.skewjoin.compiletime is set to true, since an extra union is inserted. The merge is triggered if either of hive.merge.mapfiles or hive.merge.mapredfiles is set to true. If the user has set hive.merge.mapfiles to true and hive.merge.mapredfiles to false, the idea was that the number of reducers is small, so the number of files is small anyway. However, with this optimization, we may be increasing the number of files by a big margin, so we merge aggressively.
Default:false
hive.optimize.correlation Exploit intra-query correlations.
Default:false
hive.mapred.supports.subdirectories Whether the version of Hadoop which is running supports sub-directories for tables/partitions. Many Hive optimizations can be applied if the Hadoop version supports sub-directories for tables/partitions. It was added by MAPREDUCE-1501
Default:false
hive.optimize.skewjoin.compiletime Whether to create a separate plan for skewed keys for the tables in the join. This is based on the skewed keys stored in the metadata. At compile time, the plan is broken into different joins: one for the skewed keys, and the other for the remaining keys. And then, a union is performed for the 2 joins generated above. So unless the same skewed key is present in both the joined tables, the join for the skewed key will be performed as a map-side join. The main difference between this parameter and hive.optimize.skewjoin is that this parameter uses the skew information stored in the metastore to optimize the plan at compile time itself. If there is no skew information in the metadata, this parameter will not have any effect. Both hive.optimize.skewjoin.compiletime and hive.optimize.skewjoin should be set to true. Ideally, hive.optimize.skewjoin should be renamed hive.optimize.skewjoin.runtime, but that is not being done for backward compatibility. If the skew information is correctly stored in the metadata, hive.optimize.skewjoin.compiletime will change the query plan to take care of it, and hive.optimize.skewjoin will be a no-op.
Default:false
hive.optimize.index.filter.compact.minsize Minimum size (in bytes) of the inputs on which a compact index is automatically used.
Default:5368709120
hive.optimize.index.filter.compact.maxsize Maximum size (in bytes) of the inputs on which a compact index is automatically used. A negative number is equivalent to infinity.
Default:-1
hive.index.compact.query.max.entries The maximum number of index entries to read during a query that uses the compact index. Negative value is equivalent to infinity.
Default:10000000
hive.index.compact.query.max.size The maximum number of bytes that a query using the compact index can read. Negative value is equivalent to infinity.
Default:10737418240
hive.index.compact.binary.search Whether or not to use a binary search to find the entries in an index table that match the filter, where possible
Default:true
hive.stats.autogather A flag to gather statistics automatically during the INSERT OVERWRITE command.
Default:true
hive.stats.dbclass Expects one of the patterns in [jdbc(:.*), hbase, counter, custom, fs]. The storage that stores temporary Hive statistics. In filesystem-based statistics collection (‘fs’), each task writes statistics it has collected to a file on the filesystem, which will be aggregated after the job has finished. Supported values are fs (filesystem), jdbc:database (where database can be derby, mysql, etc.), hbase, counter, and custom as defined in StatsSetupConst.java.
Default:fs
hive.stats.jdbcdriver The JDBC driver for the database that stores temporary Hive statistics.
Default:org.apache.derby.jdbc.EmbeddedDriver
hive.stats.dbconnectionstring The default connection string for the database that stores temporary Hive statistics.
Default:jdbc:derby:;databaseName=TempStatsStore;create=true
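A sketch switching temporary statistics storage from the filesystem default to a JDBC store; the MySQL driver and connection string are hypothetical:

<property>
  <name>hive.stats.dbclass</name>
  <value>jdbc:mysql</value> <!-- jdbc:database pattern, per hive.stats.dbclass above -->
</property>
<property>
  <name>hive.stats.jdbcdriver</name>
  <value>com.mysql.jdbc.Driver</value> <!-- hypothetical driver choice -->
</property>
<property>
  <name>hive.stats.dbconnectionstring</name>
  <value>jdbc:mysql://db-host.example.com:3306/hive_stats</value> <!-- hypothetical host/database -->
</property>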
hive.stats.default.publisher The Java class (implementing the StatsPublisher interface) that is used by default if hive.stats.dbclass is of custom type.
Default:
hive.stats.default.aggregator The Java class (implementing the StatsAggregator interface) that is used by default if hive.stats.dbclass is of custom type.
Default:
hive.stats.jdbc.timeout Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Timeout value used by JDBC connection and statements.
Default:30s
hive.stats.atomic Whether to update metastore stats only if all stats are available.
Default:false
hive.stats.retries.max Maximum number of retries when the stats publisher/aggregator gets an exception updating the intermediate database. The default is no retries on failure.
Default:0
hive.stats.retries.wait Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The base waiting window before the next retry. The actual wait time is calculated by baseWindow * failures + baseWindow * (failures + 1) * (random number between [0.0, 1.0]).
Default:3000ms
hive.stats.collect.rawdatasize Should the raw data size be collected when analyzing tables?
Default:true
hive.client.stats.counters Subset of counters that should be of interest for hive.client.stats.publishers (when one wants to limit their publishing). Non-display names should be used
Default:
hive.stats.reliable Whether queries will fail because stats cannot be collected completely accurately. If this is set to true, reading/writing from/into a partition may fail because the stats could not be computed accurately.
Default:false
hive.analyze.stmt.collect.partlevel.stats Whether queries like ANALYZE TABLE T COMPUTE STATISTICS FOR COLUMNS should compute partition-level stats for a partitioned table even when no partition spec is specified.
Default:true
hive.stats.gather.num.threads Number of threads used by partialscan/noscan analyze command for partitioned tables. This is applicable only for file formats that implement StatsProvidingRecordReader (like ORC).
Default:10
hive.stats.collect.tablekeys Whether join and group by keys on tables are derived and maintained in the QueryPlan. This is useful to identify how tables are accessed and to determine if they should be bucketed.
Default:false
hive.stats.collect.scancols Whether column accesses are tracked in the QueryPlan. This is useful to identify how tables are accessed and to determine if there are wasted columns that can be trimmed.
Default:false
hive.stats.ndv.error Standard error expressed in percentage. Provides a tradeoff between accuracy and compute cost. A lower value for error indicates higher accuracy and a higher compute cost.
Default:20.0
hive.stats.key.prefix.max.length Determines if, when the prefix of the key used for intermediate stats collection exceeds a certain length, a hash of the key is used instead. If the value is < 0 then hashing is never used; if the value is >= 0 then hashing is used only when the key prefix length exceeds that value.
Default:150
hive.stats.key.prefix.reserve.length Reserved length for postfix of stats key. Currently only meaningful for counter type which should keep length of full stats key smaller than max length configured by hive.stats.key.prefix.max.length. For counter type, it should be bigger than the length of LB spec if exists.
Default:24
hive.stats.max.variable.length To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.), the average row size is multiplied by the total number of rows coming out of each operator. Average row size is computed from the average column size of all columns in the row. In the absence of column statistics, this value is used for variable-length columns (like string, bytes etc.). For fixed-length columns their corresponding Java equivalent sizes are used (float: 4 bytes, double: 8 bytes, etc.).
Default:100
hive.stats.list.num.entries To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.), the average row size is multiplied by the total number of rows coming out of each operator. Average row size is computed from the average column size of all columns in the row. In the absence of column statistics and for variable-length complex columns like list, the average number of entries/values can be specified using this config.
Default:10
hive.stats.map.num.entries To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.), the average row size is multiplied by the total number of rows coming out of each operator. Average row size is computed from the average column size of all columns in the row. In the absence of column statistics and for variable-length complex columns like map, the average number of entries/values can be specified using this config.
Default:10
hive.stats.fetch.partition.stats Annotation of operator tree with statistics information requires partition level basic statistics like number of rows, data size and file size. Partition statistics are fetched from metastore. Fetching partition statistics for each needed partition can be expensive when the number of partitions is high. This flag can be used to disable fetching of partition statistics from metastore. When this flag is disabled, Hive will make calls to filesystem to get file sizes and will estimate the number of rows from row schema.
Default:true
hive.stats.fetch.column.stats Annotation of operator tree with statistics information requires column statistics. Column statistics are fetched from metastore. Fetching column statistics for each needed column can be expensive when the number of columns is high. This flag can be used to disable fetching of column statistics from metastore.
Default:false
hive.stats.join.factor Hive/Tez optimizer estimates the data size flowing through each of the operators. The JOIN operator uses column statistics to estimate the number of rows flowing out of it and hence the data size. In the absence of column statistics, this factor determines the number of rows that flow out of the JOIN operator.
Default:1.1
hive.stats.deserialization.factor Hive/Tez optimizer estimates the data size flowing through each of the operators. In the absence of basic statistics like number of rows and data size, file size is used to estimate the number of rows and data size. Since files in tables/partitions are serialized (and optionally compressed) the estimates of number of rows and data size cannot be reliably determined. This factor is multiplied with the file size to account for serialization and compression.
Default:1.0
hive.support.concurrency Whether Hive supports concurrency control or not. A ZooKeeper instance must be up and running when using the ZooKeeper Hive lock manager.
Default:false
hive.lock.manager
Default:org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager
hive.lock.numretries The number of times you want to try to get all the locks
Default:100
hive.unlock.numretries The number of times you want to retry a single unlock operation.
Default:10
hive.lock.sleep.between.retries Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. The sleep time between various retries
Default:60s
hive.lock.mapred.only.operation Controls whether locks are acquired only for queries that need to execute at least one MapReduce job.
Default:false
hive.zookeeper.quorum List of ZooKeeper servers to talk to. This is needed for: 1. read/write locks, when hive.lock.manager is set to org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager; 2. service discovery, when HiveServer2 supports service discovery via ZooKeeper; 3. delegation token storage, if the ZooKeeper store is used and hive.cluster.delegation.token.store.zookeeper.connectString is not set.
Default:
hive.zookeeper.client.port The port of ZooKeeper servers to talk to. If the list of Zookeeper servers specified in hive.zookeeper.quorum does not contain port numbers, this value is used.
Default:2181
hive.zookeeper.session.timeout ZooKeeper client’s session timeout. The client is disconnected, and as a result all locks are released, if a heartbeat is not sent within the timeout.
Default:600000
hive.zookeeper.namespace The parent node under which all ZooKeeper nodes are created.
Default:hive_zookeeper_namespace
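A minimal hive-site.xml sketch tying the ZooKeeper settings above together (the hostnames are illustrative; since the quorum entries carry no port numbers, hive.zookeeper.client.port applies to all of them):

<property>
  <name>hive.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>
<property>
  <name>hive.zookeeper.client.port</name>
  <value>2181</value>
</property>
<property>
  <name>hive.zookeeper.namespace</name>
  <value>hive_zookeeper_namespace</value>
</property>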
hive.zookeeper.clean.extra.nodes Clean extra nodes at the end of the session.
Default:false
hive.txn.manager Set to org.apache.hadoop.hive.ql.lockmgr.DbTxnManager as part of turning on Hive transactions, which also requires appropriate settings for hive.compactor.initiator.on, hive.compactor.worker.threads, hive.support.concurrency (true), hive.enforce.bucketing (true), and hive.exec.dynamic.partition.mode (nonstrict). The default DummyTxnManager replicates pre-Hive-0.13 behavior and provides no transactions.
Default:org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager
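Putting the requirements listed above together, a minimal hive-site.xml sketch for turning on Hive transactions might look like the following (set hive.compactor.initiator.on to true on exactly one metastore instance; the worker thread count is an illustrative starting value):

<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.enforce.bucketing</name>
  <value>true</value>
</property>
<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>nonstrict</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>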
hive.txn.timeout Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time after which transactions are declared aborted if the client has not sent a heartbeat.
Default:300s
hive.txn.max.open.batch Maximum number of transactions that can be fetched in one call to open_txns(). This controls how many transactions streaming agents such as Flume or Storm open simultaneously. The streaming agent then writes that number of entries into a single file (per Flume agent or Storm bolt). Thus increasing this value decreases the number of delta files created by streaming agents. But it also increases the number of open transactions that Hive has to track at any given time, which may negatively affect read performance.
Default:1000
hive.compactor.initiator.on Whether to run the initiator and cleaner threads on this metastore instance or not. Set this to true on one instance of the Thrift metastore service as part of turning on Hive transactions. For a complete list of parameters required for turning on transactions, see hive.txn.manager.
Default:false
hive.compactor.worker.threads How many compactor worker threads to run on this metastore instance. Set this to a positive number on one or more instances of the Thrift metastore service as part of turning on Hive transactions. For a complete list of parameters required for turning on transactions, see hive.txn.manager. Worker threads spawn MapReduce jobs to do compactions. They do not do the compactions themselves. Increasing the number of worker threads will decrease the time it takes tables or partitions to be compacted once they are determined to need compaction. It will also increase the background load on the Hadoop cluster as more MapReduce jobs will be running in the background.
Default:0
hive.compactor.worker.timeout Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time in seconds after which a compaction job will be declared failed and the compaction re-queued.
Default:86400s
hive.compactor.check.interval Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time in seconds between checks to see if any tables or partitions need to be compacted. This should be kept high because each check for compaction requires many calls against the NameNode. Decreasing this value will reduce the time it takes for compaction to be started for a table or partition that requires compaction. However, checking if compaction is needed requires several calls to the NameNode for each table or partition that has had a transaction done on it since the last major compaction. So decreasing this value will increase the load on the NameNode.
Default:300s
hive.compactor.delta.num.threshold Number of delta directories in a table or partition that will trigger a minor compaction.
Default:10
hive.compactor.delta.pct.threshold Percentage (fractional) size of the delta files relative to the base that will trigger a major compaction. (1.0 = 100%, so the default 0.1 = 10%.)
Default:0.1
hive.compactor.abortedtxn.threshold Number of aborted transactions involving a given table or partition that will trigger a major compaction.
Default:1000
hive.compactor.cleaner.run.interval Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time between runs of the cleaner thread
Default:5000ms
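As a worked example of the thresholds above: with hive.compactor.delta.pct.threshold at its default of 0.1, a partition whose base is 1GB is considered for major compaction once its delta files exceed roughly 100MB, while 10 delta directories trigger a minor compaction. A hedged hive-site.xml sketch that checks more frequently (the interval is an illustrative value; note the NameNode cost described above):

<property>
  <name>hive.compactor.check.interval</name>
  <value>120s</value>
</property>
<property>
  <name>hive.compactor.delta.num.threshold</name>
  <value>10</value>
</property>
<property>
  <name>hive.compactor.delta.pct.threshold</name>
  <value>0.1</value>
</property>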
hive.hbase.wal.enabled Whether writes to HBase should be forced to the write-ahead log. Disabling this improves HBase write performance at the risk of lost writes in case of a crash.
Default:true
hive.hbase.generatehfiles True when HBaseStorageHandler should generate hfiles instead of operating against the online table.
Default:false
hive.hbase.snapshot.name The HBase table snapshot name to use.
Default:
hive.hbase.snapshot.restoredir The directory in which to restore the HBase table snapshot.
Default:/tmp
hive.archive.enabled Whether archiving operations are permitted
Default:false
hive.optimize.index.groupby Whether to enable optimization of group-by queries using Aggregate indexes.
Default:false
hive.outerjoin.supports.filters
Default:true
hive.fetch.task.conversion Expects one of [none, minimal, more]. Some select queries can be converted to a single FETCH task, minimizing latency. Currently the query should be single-sourced, with no subqueries, and should not have any aggregations or distincts (which incur a ReduceSink), lateral views, or joins. 0. none: disable hive.fetch.task.conversion 1. minimal: SELECT STAR, FILTER on partition columns, LIMIT only 2. more: SELECT, FILTER, LIMIT only (supports TABLESAMPLE and virtual columns)
Default:more
hive.fetch.task.conversion.threshold Input threshold for applying hive.fetch.task.conversion. If target table is native, input length is calculated by summation of file lengths. If it’s not native, storage handler for the table can optionally implement org.apache.hadoop.hive.ql.metadata.InputEstimator interface.
Default:1073741824
hive.fetch.task.aggr Aggregation queries with no group-by clause (for example, select count(*) from src) execute final aggregations in a single reduce task. If this is set to true, Hive delegates the final aggregation stage to a fetch task, possibly decreasing the query time.
Default:false
hive.compute.query.using.stats When set to true, Hive will answer a few queries like count(1) purely using stats stored in the metastore. For basic stats collection, set hive.stats.autogather to true. For more advanced stats collection, run ‘analyze table’ queries.
Default:false
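For metadata-only answers to queries like select count(1) from src, both flags described above must be on; a minimal hive-site.xml sketch:

<property>
  <name>hive.stats.autogather</name>
  <value>true</value>
</property>
<property>
  <name>hive.compute.query.using.stats</name>
  <value>true</value>
</property>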
hive.fetch.output.serde The SerDe used by FetchTask to serialize the fetch output.
Default:org.apache.hadoop.hive.serde2.DelimitedJSONSerDe
hive.cache.expr.evaluation If true, the evaluation result of a deterministic expression referenced twice or more will be cached. For example, in a filter condition like ‘.. where key + 10 = 100 or key + 10 = 0’ the expression ‘key + 10’ will be evaluated/cached once and reused for the following expression (‘key + 10 = 0’). Currently, this is applied only to expressions in select or filter operators.
Default:true
hive.variable.substitute This enables substitution using syntax like ${var} ${system:var} and ${env:var}.
Default:true
hive.variable.substitute.depth The maximum number of replacements the substitution engine will do.
Default:40
hive.conf.validation Enables type checking for registered Hive configurations
Default:true
hive.semantic.analyzer.hook
Default:
hive.security.authorization.enabled Enable or disable the Hive client authorization.
Default:false
hive.security.authorization.manager The Hive client authorization manager class name. The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider.
Default:org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider
hive.security.authenticator.manager Hive client authenticator manager class name. The user-defined authenticator should implement the interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider.
Default:org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator
hive.security.metastore.authorization.manager Names of authorization manager classes (comma separated) to be used in the metastore for authorization. The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveMetastoreAuthorizationProvider. All authorization manager classes have to successfully authorize the metastore API call for the command execution to be allowed.
Default:org.apache.hadoop.hive.ql.security.authorization.DefaultHiveMetastoreAuthorizationProvider
hive.security.metastore.authorization.auth.reads If this is true, the metastore authorizer authorizes read actions on databases and tables.
Default:true
hive.security.metastore.authenticator.manager Authenticator manager class name to be used in the metastore for authentication. The user-defined authenticator should implement the interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider.
Default:org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator
hive.security.authorization.createtable.user.grants The privileges automatically granted to some users whenever a table gets created. For example, “userX,userY:select;userZ:create” will grant select privilege to userX and userY, and grant create privilege to userZ, whenever a new table is created.
Default:
hive.security.authorization.createtable.group.grants The privileges automatically granted to some groups whenever a table gets created. For example, “groupX,groupY:select;groupZ:create” will grant select privilege to groupX and groupY, and grant create privilege to groupZ, whenever a new table is created.
Default:
hive.security.authorization.createtable.role.grants The privileges automatically granted to some roles whenever a table gets created. For example, “roleX,roleY:select;roleZ:create” will grant select privilege to roleX and roleY, and grant create privilege to roleZ, whenever a new table is created.
Default:
hive.security.authorization.createtable.owner.grants The privileges automatically granted to the owner whenever a table gets created. An example like “select,drop” will grant select and drop privilege to the owner of the table. Note that the default gives the creator of a table no access to the table (but see HIVE-8067).
Default:
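A sketch of automatic grants on table creation, reusing the example strings from the descriptions above (the user names are placeholders):

<property>
  <name>hive.security.authorization.createtable.owner.grants</name>
  <value>select,drop</value>
</property>
<property>
  <name>hive.security.authorization.createtable.user.grants</name>
  <value>userX,userY:select;userZ:create</value>
</property>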
hive.security.authorization.sqlstd.confwhitelist List of comma-separated Java regexes. Configuration parameters that match these regexes can be modified by the user when SQL standard authorization is enabled. To get the default value, use the ‘set <param>’ command. Note that the hive.conf.restricted.list checks are still enforced after the whitelist check.
Default:
hive.security.authorization.sqlstd.confwhitelist.append List of comma separated Java regexes, to be appended to list set in hive.security.authorization.sqlstd.confwhitelist. Using this list instead of updating the original list means that you can append to the defaults set by SQL standard authorization instead of replacing it entirely.
Default:
hive.cli.print.header Whether to print the names of the columns in query output.
Default:false
hive.error.on.empty.partition Whether to throw an exception if dynamic partition insert generates empty results.
Default:false
hive.index.compact.file internal variable
Default:
hive.index.blockfilter.file internal variable
Default:
hive.index.compact.file.ignore.hdfs When true, the HDFS location stored in the index file will be ignored at runtime. If the data was moved or the cluster was renamed, the index data should still be usable.
Default:false
hive.exim.uri.scheme.whitelist A comma separated list of acceptable URI schemes for import and export.
Default:hdfs,pfile
hive.mapper.cannot.span.multiple.partitions
Default:false
hive.rework.mapredwork Whether to rework the mapred work or not. This was first introduced by SymlinkTextInputFormat to replace symlink files with real paths at compile time.
Default:false
hive.exec.concatenate.check.index If this is set to true, Hive will throw an error when doing ‘alter table tbl_name [partSpec] concatenate’ on a table/partition that has indexes on it. Setting this to true helps users avoid handling the index drop, recreation, and rebuild work, which is very helpful for tables with thousands of partitions.
Default:true
hive.io.exception.handlers A list of IO exception handler class names. This is used to construct a list of exception handlers to handle exceptions thrown by record readers.
Default:
hive.server2.logging.operation.enabled When true, HS2 will save operation logs and make them available for clients
Default:true
hive.server2.logging.operation.log.location Top level directory where operation logs are stored if logging functionality is enabled
Default:${system:java.io.tmpdir}/${system:user.name}/operation_logs
hive.server2.logging.operation.verbose When true, HS2 operation logs available for clients will be verbose
Default:false
hive.log4j.file Hive log4j configuration file. If the property is not set, then logging will be initialized using hive-log4j.properties found on the classpath. If the property is set, the value must be a valid URI (java.net.URI, e.g. “file:///tmp/my-logging.properties”), which you can then extract a URL from and pass to PropertyConfigurator.configure(URL).
Default:
hive.exec.log4j.file Hive log4j configuration file for execution mode (sub-command). If the property is not set, then logging will be initialized using hive-exec-log4j.properties found on the classpath. If the property is set, the value must be a valid URI (java.net.URI, e.g. “file:///tmp/my-logging.properties”), which you can then extract a URL from and pass to PropertyConfigurator.configure(URL).
Default:
hive.autogen.columnalias.prefix.label String used as a prefix when auto-generating column aliases. By default the prefix label will be appended with a column position number to form the column alias. Auto generation happens if an aggregate function is used in a select clause without an explicit alias.
Default:_c
hive.autogen.columnalias.prefix.includefuncname Whether to include function name in the column alias auto generated by Hive.
Default:false
hive.exec.perf.logger The class responsible for logging client side performance metrics. Must be a subclass of org.apache.hadoop.hive.ql.log.PerfLogger
Default:org.apache.hadoop.hive.ql.log.PerfLogger
hive.start.cleanup.scratchdir Whether to clean up the Hive scratch directory when starting the Hive server.
Default:false
hive.insert.into.multilevel.dirs Whether to allow inserts into multilevel directories, as in “insert directory ‘/HIVEFT25686/chinna/’ from table”.
Default:false
hive.warehouse.subdir.inherit.perms Set this to true if the table directories should inherit the permission of the warehouse or database directory instead of being created with permissions derived from the dfs umask.
Default:false
hive.insert.into.external.tables whether insert into external tables is allowed
Default:true
hive.exec.driver.run.hooks A comma separated list of hooks which implement HiveDriverRunHook. These will be run at the beginning and end of Driver.run, in the order specified.
Default:
hive.ddl.output.format The data format to use for DDL output. One of “text” (for human readable text) or “json” (for a json object).
Default:
hive.entity.separator Separator used to construct names of tables and partitions. For example, dbname@tablename@partitionname
Default:@
hive.display.partition.cols.separately In older Hive versions (0.10 and earlier) no distinction was made between partition columns and non-partition columns while displaying columns in describe table. From 0.12 onwards, they are displayed separately. This flag will let you get the old behavior, if desired. See the test case in the patch for HIVE-6689.
Default:true
hive.ssl.protocol.blacklist SSL Versions to disable for all Hive Servers
Default:SSLv2,SSLv2Hello,SSLv3
hive.server2.max.start.attempts Expects a value bigger than 0. Number of times HiveServer2 will attempt to start before exiting, sleeping 60 seconds between retries. The default of 30 will keep trying for 30 minutes.
Default:30
hive.server2.support.dynamic.service.discovery Whether HiveServer2 supports dynamic service discovery for its clients. To support this, each instance of HiveServer2 currently uses ZooKeeper to register itself, when it is brought up. JDBC/ODBC clients should use the ZooKeeper ensemble: hive.zookeeper.quorum in their connection string.
Default:false
hive.server2.zookeeper.namespace The parent node in ZooKeeper used by HiveServer2 when supporting dynamic service discovery.
Default:hiveserver2
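A minimal hive-site.xml sketch for dynamic service discovery (the ZooKeeper hosts are illustrative assumptions; JDBC/ODBC clients then point at the ensemble named in hive.zookeeper.quorum instead of a specific HiveServer2 host):

<property>
  <name>hive.server2.support.dynamic.service.discovery</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.zookeeper.namespace</name>
  <value>hiveserver2</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>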
hive.server2.global.init.file.location Either the location of a HS2 global init file or a directory containing a .hiverc file. If the property is set, the value must be a valid path to an init file or directory where the init file is located.
Default:${env:HIVE_CONF_DIR}
hive.server2.transport.mode Expects one of [binary, http]. Transport mode of HiveServer2.
Default:binary
hive.server2.thrift.bind.host Bind host on which to run the HiveServer2 Thrift service.
Default:
hive.server2.thrift.http.port Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is ‘http’.
Default:10001
hive.server2.thrift.http.path Path component of URL endpoint when in HTTP mode.
Default:cliservice
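To run HiveServer2 over HTTP instead of the default binary transport, a minimal sketch using the defaults listed above:

<property>
  <name>hive.server2.transport.mode</name>
  <value>http</value>
</property>
<property>
  <name>hive.server2.thrift.http.port</name>
  <value>10001</value>
</property>
<property>
  <name>hive.server2.thrift.http.path</name>
  <value>cliservice</value>
</property>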
hive.server2.thrift.http.min.worker.threads Minimum number of worker threads when in HTTP mode.
Default:5
hive.server2.thrift.http.max.worker.threads Maximum number of worker threads when in HTTP mode.
Default:500
hive.server2.thrift.http.max.idle.time Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Maximum idle time for a connection on the server when in HTTP mode.
Default:1800s
hive.server2.thrift.http.worker.keepalive.time Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Keepalive time for an idle http worker thread. When the number of workers exceeds min workers, excessive threads are killed after this time interval.
Default:60s
hive.server2.thrift.port Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is ‘binary’.
Default:10000
hive.server2.thrift.sasl.qop Expects one of [auth, auth-int, auth-conf]. SASL QOP value; set it to one of the following values to enable higher levels of protection for HiveServer2 communication with clients. Setting hadoop.rpc.protection to a higher level than HiveServer2 does not make sense in most situations; HiveServer2 ignores hadoop.rpc.protection in favor of hive.server2.thrift.sasl.qop. “auth”: authentication only (default); “auth-int”: authentication plus integrity protection; “auth-conf”: authentication plus integrity and confidentiality protection. This is applicable only if HiveServer2 is configured to use Kerberos authentication.
Default:auth
hive.server2.thrift.min.worker.threads Minimum number of Thrift worker threads
Default:5
hive.server2.thrift.max.worker.threads Maximum number of Thrift worker threads
Default:500
hive.server2.thrift.worker.keepalive.time Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Keepalive time (in seconds) for an idle worker thread. When the number of workers exceeds min workers, excessive threads are killed after this time interval.
Default:60s
hive.server2.async.exec.threads Number of threads in the async thread pool for HiveServer2
Default:100
hive.server2.async.exec.shutdown.timeout Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. How long HiveServer2 shutdown will wait for async threads to terminate.
Default:10s
hive.server2.async.exec.wait.queue.size Size of the wait queue for async thread pool in HiveServer2. After hitting this limit, the async thread pool will reject new requests.
Default:100
hive.server2.async.exec.keepalive.time Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Time that an idle HiveServer2 async thread (from the thread pool) will wait for a new task to arrive before terminating
Default:10s
hive.server2.long.polling.timeout Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time that HiveServer2 will wait before responding to asynchronous calls that use long polling
Default:5000ms
hive.server2.authentication Expects one of [nosasl, none, ldap, kerberos, pam, custom]. Client authentication types. NONE: no authentication check; LDAP: LDAP/AD based authentication; KERBEROS: Kerberos/GSSAPI authentication; CUSTOM: custom authentication provider (use with property hive.server2.custom.authentication.class); PAM: pluggable authentication module; NOSASL: raw transport.
Default:NONE
hive.server2.allow.user.substitution Allow alternate user to be specified as part of HiveServer2 open connection request.
Default:true
hive.server2.authentication.kerberos.keytab Kerberos keytab file for server principal
Default:
hive.server2.authentication.kerberos.principal Kerberos server principal
Default:
hive.server2.authentication.spnego.keytab Keytab file for the SPNEGO principal (optional); a typical value looks like /etc/security/keytabs/spnego.service.keytab. This keytab is used by HiveServer2 when Kerberos security is enabled and the HTTP transport mode is used. It needs to be set only if SPNEGO is to be used in authentication. SPNEGO authentication is honored only if both a valid hive.server2.authentication.spnego.principal and hive.server2.authentication.spnego.keytab are specified.
Default:
hive.server2.authentication.spnego.principal SPNEGO service principal (optional); a typical value looks like HTTP/_HOST@EXAMPLE.COM. The SPNEGO service principal is used by HiveServer2 when Kerberos security is enabled and the HTTP transport mode is used. It needs to be set only if SPNEGO is to be used in authentication.
Default:
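A hedged Kerberos sketch combining the settings above (the principal names and keytab paths are illustrative assumptions; the SPNEGO pair matters only when the HTTP transport mode is used):

<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>hive/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/hive.service.keytab</value>
</property>
<property>
  <name>hive.server2.authentication.spnego.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hive.server2.authentication.spnego.keytab</name>
  <value>/etc/security/keytabs/spnego.service.keytab</value>
</property>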
hive.server2.authentication.ldap.url LDAP connection URL
Default:
hive.server2.authentication.ldap.baseDN LDAP base DN
Default:
hive.server2.authentication.ldap.Domain
Default:
hive.server2.custom.authentication.class Custom authentication class. Used when property ‘hive.server2.authentication’ is set to ‘CUSTOM’. The provided class must be a proper implementation of the interface org.apache.hive.service.auth.PasswdAuthenticationProvider. HiveServer2 will call its Authenticate(user, password) method to authenticate requests. The implementation may optionally implement Hadoop’s org.apache.hadoop.conf.Configurable interface to grab Hive’s Configuration object.
Default:
hive.server2.authentication.pam.services List of the underlying PAM services that should be used when the auth type is PAM. A file with the same name must exist in /etc/pam.d.
Default:
hive.server2.enable.doAs Setting this property to true will have HiveServer2 execute Hive operations as the user making the calls to it.
Default:true
hive.server2.table.type.mapping Expects one of [classic, hive]. This setting reflects how HiveServer2 will report the table types for JDBC and other client implementations that retrieve the available tables and supported table types. HIVE: Exposes Hive’s native table types like MANAGED_TABLE, EXTERNAL_TABLE, VIRTUAL_VIEW. CLASSIC: More generic types like TABLE and VIEW.
Default:CLASSIC
hive.server2.session.hook
Default:
hive.server2.use.SSL Set this to true for using SSL encryption in HiveServer2.
Default:false
hive.server2.keystore.path SSL certificate keystore location.
Default:
hive.server2.keystore.password SSL certificate keystore password.
Default:
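A minimal SSL sketch (the keystore path and password are placeholders; in practice the password would not be left in a world-readable file):

<property>
  <name>hive.server2.use.SSL</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.keystore.path</name>
  <value>/etc/hive/conf/hiveserver2.jks</value>
</property>
<property>
  <name>hive.server2.keystore.password</name>
  <value>changeit</value>
</property>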
hive.security.command.whitelist Comma separated list of non-SQL Hive commands users are authorized to execute
Default:set,reset,dfs,add,list,delete,reload,compile
hive.server2.session.check.interval Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. The time should be bigger than or equal to 3000 msec. The check interval for session/operation timeout, which can be disabled by setting to zero or negative value.
Default:0ms
hive.server2.idle.session.timeout Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Session will be closed when it’s not accessed for this duration, which can be disabled by setting to zero or negative value.
Default:0ms
hive.server2.idle.operation.timeout Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Operation will be closed when it’s not accessed for this duration of time, which can be disabled by setting to zero value. With positive value, it’s checked for operations in terminal state only (FINISHED, CANCELED, CLOSED, ERROR). With negative value, it’s checked for all of the operations regardless of state.
Default:0ms
hive.conf.restricted.list Comma separated list of configuration options which are immutable at runtime
Default:hive.security.authenticator.manager, hive.security.authorization.manager, hive.users.in.admin.role
hive.multi.insert.move.tasks.share.dependencies If this is set, all move tasks for tables/partitions (not directories) at the end of a multi-insert query will only begin once the dependencies for all these move tasks have been met. Advantages: If concurrency is enabled, the locks will only be released once the query has finished, so with this config enabled, the time when the table/partition is generated will be much closer to when the lock on it is released. Disadvantages: If concurrency is not enabled, with this disabled, the tables/partitions which are produced by this query and finish earlier will be available for querying much earlier. Since the locks are only released once the query finishes, this does not apply if concurrency is enabled.
Default:false
hive.exec.infer.bucket.sort If this is set, when writing partitions, the metadata will include the bucketing/sorting properties with which the data was written if any (this will not overwrite the metadata inherited from the table if the table is bucketed/sorted)
Default:false
hive.exec.infer.bucket.sort.num.buckets.power.two If this is set, when setting the number of reducers for the map-reduce task which writes the final output files, Hive will choose a number which is a power of two, unless the user specifies the number of reducers to use via mapred.reduce.tasks. The number of reducers may be set to a power of two only to be followed by a merge task, preventing anything from being inferred. With hive.exec.infer.bucket.sort set to true: Advantages: If this is not set, the number of buckets for partitions will seem arbitrary, which means that the number of mappers used for optimized joins, for example, will be very low. With this set, since the number of buckets used for any partition is a power of two, the number of mappers used for optimized joins will be the least number of buckets used by any partition being joined. Disadvantages: This may mean a much larger or much smaller number of reducers being used in the final map-reduce job; e.g. if a job was originally going to take 257 reducers, it will now take 512 reducers; similarly, if the max number of reducers is 511 and a job was going to use this many, it will now use 256 reducers.
Default:false
hive.optimize.listbucketing Enable the list bucketing optimizer. Disabled by default.
Default:false
hive.server.read.socket.timeout Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified. Timeout for the HiveServer to close the connection if no response from the client. By default, 10 seconds.
Default:10s
hive.server.tcp.keepalive Whether to enable TCP keepalive for the Hive Server. Keepalive will prevent accumulation of half-open connections.
Default:true
hive.decode.partition.name Whether to show the unquoted partition names in query results.
Default:false
hive.execution.engine Expects one of [mr, tez]. Chooses execution engine. Options are: mr (Map reduce, default) or tez (hadoop 2 only)
Default:mr
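Switching engines is a single property; a minimal sketch (Tez additionally requires a Tez installation on a Hadoop 2 cluster):

<property>
  <name>hive.execution.engine</name>
  <value>tez</value>
</property>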
hive.jar.directory The location Hive in Tez mode will look in to find a site-wide installed Hive instance.
Default:
hive.user.install.directory If hive (in tez mode only) cannot find a usable hive jar in “hive.jar.directory”, it will upload the hive jar to “hive.user.install.directory/user.name” and use it to run queries.
Default:hdfs:///user/
hive.vectorized.execution.enabled This flag should be set to true to enable vectorized mode of query execution. The default value is false.
Default:false
hive.vectorized.execution.reduce.enabled This flag should be set to true to enable vectorized mode of the reduce-side of query execution. The default value is true.
Default:true
hive.vectorized.execution.reduce.groupby.enabled This flag should be set to true to enable vectorized mode of the reduce-side GROUP BY query execution. The default value is true.
Default:true
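A sketch enabling vectorized execution on both the map and reduce sides, per the flags above:

<property>
  <name>hive.vectorized.execution.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.vectorized.execution.reduce.enabled</name>
  <value>true</value>
</property>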
hive.vectorized.groupby.checkinterval Number of entries added to the group by aggregation hash before a recomputation of average entry size is performed.
Default:100000
hive.vectorized.groupby.maxentries Max number of entries in the vector group-by aggregation hashtables. Exceeding this will trigger a flush regardless of memory pressure.
Default:1000000
hive.vectorized.groupby.flush.percent Percent of entries in the group by aggregation hash flushed when the memory threshold is exceeded.
Default:0.1
hive.typecheck.on.insert
Default:true
hive.hadoop.classpath On Windows, the HIVE_HADOOP_CLASSPATH Java parameter must be passed when starting HiveServer2 using “-hiveconf hive.hadoop.classpath=%HIVE_LIB%”.
Default:
hive.rpc.query.plan Whether to send the query plan via local resource or RPC
Default:false
hive.compute.splits.in.am Whether to generate the splits locally or in the AM (tez only)
Default:true
hive.prewarm.enabled Enables container prewarm for Tez (Hadoop 2 only)
Default:false
hive.prewarm.numcontainers Controls the number of containers to prewarm for Tez (Hadoop 2 only)
Default:10
hive.stageid.rearrange Expects one of [none, idonly, traverse, execution].
Default:none
hive.explain.dependency.append.tasktype
Default:false
hive.counters.group.name The name of counter group for internal Hive variables (CREATED_FILE, FATAL_ERROR, etc.)
Default:HIVE
hive.server2.tez.default.queues A list of comma separated values corresponding to YARN queues of the same name. When HiveServer2 is launched in Tez mode, this configuration needs to be set for multiple Tez sessions to run in parallel on the cluster.
Default:
hive.server2.tez.sessions.per.default.queue A positive integer that determines the number of Tez sessions that should be launched on each of the queues specified by “hive.server2.tez.default.queues”. Determines the parallelism on each queue.
Default:1
hive.server2.tez.initialize.default.sessions This flag is used in HiveServer2 to enable a user to use HiveServer2 without turning on Tez for HiveServer2. The user could potentially want to run queries over Tez without the pool of sessions.
Default:false
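A sketch for a pre-initialized pool of parallel Tez sessions (the queue names are illustrative; this example launches 2 sessions on each of two YARN queues, 4 in total):

<property>
  <name>hive.server2.tez.default.queues</name>
  <value>etl,adhoc</value>
</property>
<property>
  <name>hive.server2.tez.sessions.per.default.queue</name>
  <value>2</value>
</property>
<property>
  <name>hive.server2.tez.initialize.default.sessions</name>
  <value>true</value>
</property>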
hive.support.quoted.identifiers Expects one of [none, column]. Whether to use quoted identifiers. none: default (past) behavior, implying only alphanumeric and underscore characters are valid in identifiers. column: implies column names can contain any character.
Default:column
hive.users.in.admin.role Comma separated list of users who are in admin role for bootstrapping. More users can be added in ADMIN role later.
Default:
hive.compat Enable (configurable) deprecated behaviors by setting desired level of backward compatibility. Setting to 0.12: Maintains division behavior: int / int = double
Default:0.12
hive.convert.join.bucket.mapjoin.tez Whether joins can be automatically converted to bucket map joins in hive when tez is used as the execution engine.
Default:false
hive.exec.check.crossproducts Check if a plan contains a Cross Product. If there is one, output a warning to the Session’s console.
Default:true
hive.localize.resource.wait.interval Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Time to wait for another thread to localize the same resource for hive-tez.
Default:5000ms
hive.localize.resource.num.wait.attempts The number of attempts waiting for localizing a resource in hive-tez.
Default:5
hive.tez.auto.reducer.parallelism Turn on Tez’s auto reducer parallelism feature. When enabled, Hive will still estimate data sizes and set parallelism estimates. Tez will sample source vertices’ output sizes and adjust the estimates at runtime as necessary.
Default:false
hive.tez.max.partition.factor When auto reducer parallelism is enabled this factor will be used to over-partition data in shuffle edges.
Default:2.0
hive.tez.min.partition.factor When auto reducer parallelism is enabled this factor will be used to put a lower limit to the number of reducers that tez specifies.
Default:0.25
hive.tez.dynamic.partition.pruning When dynamic pruning is enabled, joins on partition keys will be processed by sending events from the processing vertices to the Tez application master. These events will be used to prune unnecessary partitions.
Default:true
hive.tez.dynamic.partition.pruning.max.event.size Maximum size of events sent by processors in dynamic pruning. If this size is crossed no pruning will take place.
Default:1048576
hive.tez.dynamic.partition.pruning.max.data.size Maximum total data size of events in dynamic pruning.
Default:104857600
hive.tez.smb.number.waves The number of waves in which to run the SMB join. Accounts for the cluster being occupied. Ideally it should be 1 wave.
Default:0.5
hive.tez.exec.print.summary Display a breakdown of execution steps for every query executed by the shell.
Default:false
hive.tez.exec.inplace.progress Updates tez job execution progress in-place in the terminal.
Default:true
