| name | value | description |
|------|-------|-------------|
| hadoop.tmp.dir | /tmp/hadoop-${user.name} | A base for other temporary directories. |
| hadoop.logfile.size | 10000000 | The max size of each log file. |
| hadoop.logfile.count | 10 | The max number of log files. |
| dfs.namenode.logging.level | info | The logging level for the dfs namenode. Other values are "dir" (trace namespace mutations), "block" (trace block under/over replications and block creations/deletions), or "all". |
| io.sort.factor | 10 | The number of streams to merge at once while sorting files. This determines the number of open file handles. |
| io.sort.mb | 100 | The total amount of buffer memory to use while sorting files, in megabytes. By default, gives each merge stream 1MB, which should minimize seeks. |
| io.file.buffer.size | 4096 | The size of buffer for use in sequence files. The size of this buffer should probably be a multiple of the hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. |
| io.bytes.per.checksum | 512 | The number of bytes per checksum. Must not be larger than io.file.buffer.size. |
| io.skip.checksum.errors | false | If true, when a checksum error is encountered while reading a sequence file, entries are skipped instead of throwing an exception. |
| io.map.index.skip | 0 | Number of index entries to skip between each entry. Zero by default. Setting this to values larger than zero can facilitate opening large map files using less memory. |
| io.compression.codecs | org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec | A list of the compression codec classes that can be used for compression/decompression. |
| fs.default.name | file:/// | The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The URI's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The URI's authority is used to determine the host, port, etc. for a filesystem. |
| fs.trash.root | ${hadoop.tmp.dir}/Trash | The trash directory, used by FsShell's 'rm' command. |
| fs.trash.interval | 0 | Number of minutes between trash checkpoints. If zero, the trash feature is disabled. |
| fs.file.impl | org.apache.hadoop.fs.LocalFileSystem | The FileSystem for file: URIs. |
| fs.hdfs.impl | org.apache.hadoop.dfs.DistributedFileSystem | The FileSystem for hdfs: URIs. |
| fs.s3.impl | org.apache.hadoop.fs.s3.S3FileSystem | The FileSystem for s3: URIs. |
| fs.ramfs.impl | org.apache.hadoop.fs.InMemoryFileSystem | The FileSystem for ramfs: URIs. |
| fs.inmemory.size.mb | 75 | The size of the in-memory filesystem instance, in MB. |
| fs.checkpoint.dir | ${hadoop.tmp.dir}/dfs/namesecondary | Determines where on the local filesystem the DFS secondary name node should store the temporary images and edits to merge. |
| fs.checkpoint.period | 3600 | The number of seconds between two periodic checkpoints. |
| fs.checkpoint.size | 67108864 | The size of the current edit log (in bytes) that triggers a periodic checkpoint even if fs.checkpoint.period hasn't expired. |
| dfs.secondary.info.port | 50090 | The base number for the secondary namenode info port. |
| dfs.secondary.info.bindAddress | 0.0.0.0 | The address on which the secondary namenode web UI listens. |
| dfs.datanode.bindAddress | 0.0.0.0 | The address on which the datanode listens. |
| dfs.datanode.port | 50010 | The port number that the dfs datanode server uses as a starting point to look for a free port to listen on. |
| dfs.info.bindAddress | 0.0.0.0 | The address on which the dfs namenode web UI listens. |
| dfs.info.port | 50070 | The base port number for the dfs namenode web UI. |
| dfs.datanode.dns.interface | default | The name of the network interface from which a datanode should report its IP address. |
| dfs.datanode.dns.nameserver | default | The host name or IP address of the name server (DNS) which a DataNode should use to determine the host name used by the NameNode for communication and display purposes. |
| dfs.replication.considerLoad | true | Whether chooseTarget considers the target's load. |
| dfs.default.chunk.view.size | 32768 | The number of bytes of a file to view in the browser. |
| dfs.datanode.du.reserved | 0 | Reserved space in bytes. Always leave this much space free for non-dfs use. |
| dfs.datanode.du.pct | 0.98f | When calculating remaining space, only use this percentage of the real available space. |
| dfs.name.dir | ${hadoop.tmp.dir}/dfs/name | Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy. |
| dfs.client.buffer.dir | ${hadoop.tmp.dir}/dfs/tmp | Determines where on the local filesystem a DFS client should store its blocks before it sends them to the datanode. |
| dfs.data.dir | ${hadoop.tmp.dir}/dfs/data | Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored. |
| dfs.replication | 3 | Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified at create time. |
| dfs.replication.max | 512 | Maximal block replication. |
| dfs.replication.min | 1 | Minimal block replication. |
| dfs.block.size | 67108864 | The default block size for new files. |
| dfs.df.interval | 3000 | Disk usage statistics refresh interval, in msec. |
| dfs.client.block.write.retries | 3 | The number of retries for writing blocks to the data nodes, before we signal failure to the application. |
| dfs.blockreport.intervalMsec | 3600000 | Determines the block reporting interval in milliseconds. |
| dfs.heartbeat.interval | 3 | Determines the datanode heartbeat interval in seconds. |
| dfs.namenode.handler.count | 10 | The number of server threads for the namenode. |
| dfs.safemode.threshold.pct | 0.999f | Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.replication.min. Values less than or equal to 0 mean not to start in safe mode. Values greater than 1 make safe mode permanent. |
| dfs.safemode.extension | 30000 | Determines the extension of safe mode, in milliseconds, after the threshold level is reached. |
| dfs.network.script | | Specifies a script that prints the network location path of the current machine. |
| dfs.hosts | | Names a file that contains a list of hosts that are permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, all hosts are permitted. |
| dfs.hosts.exclude | | Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded. |
| fs.s3.block.size | 67108864 | Block size to use when writing files to S3. |
| fs.s3.buffer.dir | ${hadoop.tmp.dir}/s3 | Determines where on the local filesystem the S3 filesystem should store its blocks before it sends them to S3 or after it retrieves them from S3. |
| fs.s3.maxRetries | 4 | The maximum number of retries for reading or writing blocks to S3, before we signal failure to the application. |
| fs.s3.sleepTimeSeconds | 10 | The number of seconds to sleep between each S3 retry. |
| mapred.job.tracker | local | The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. |
| mapred.job.tracker.info.bindAddress | 0.0.0.0 | The address to which the job tracker info webserver binds. |
| mapred.job.tracker.info.port | 50030 | The port that the MapReduce job tracker info webserver runs at. |
| mapred.task.tracker.report.bindAddress | 0.0.0.0 | The address to which the MapReduce task tracker report server binds. |
| mapred.task.tracker.report.port | 50050 | The port number that the MapReduce task tracker report server uses as a starting point to look for a free port to listen on. |
| mapred.local.dir | ${hadoop.tmp.dir}/mapred/local | The local directory where MapReduce stores intermediate data files. May be a comma-separated list of directories on different devices in order to spread disk i/o. Directories that do not exist are ignored. |
| local.cache.size | 10737418240 | The limit on the size of cache you want to keep, set by default to 10GB. This acts as a soft limit on the cache directory for out-of-band data. |
| mapred.system.dir | ${hadoop.tmp.dir}/mapred/system | The shared directory where MapReduce stores control files. |
| mapred.temp.dir | ${hadoop.tmp.dir}/mapred/temp | A shared directory for temporary files. |
| mapred.local.dir.minspacestart | 0 | If the space in mapred.local.dir drops under this, do not ask for more tasks. Value in bytes. |
| mapred.local.dir.minspacekill | 0 | If the space in mapred.local.dir drops under this, do not ask for more tasks until all the current ones have finished and cleaned up. Also, to save the rest of the tasks we have running, kill one of them to free up some space; start with the reduce tasks, then with the ones that have finished the least. Value in bytes. |
| mapred.tasktracker.expiry.interval | 600000 | Expert: the time interval, in milliseconds, after which a tasktracker is declared 'lost' if it doesn't send heartbeats. |
| mapred.map.tasks | 2 | The default number of map tasks per job. Typically set to a prime several times greater than the number of available hosts. Ignored when mapred.job.tracker is "local". |
| mapred.reduce.tasks | 1 | The default number of reduce tasks per job. Typically set to a prime close to the number of available hosts. Ignored when mapred.job.tracker is "local". |
| mapred.map.max.attempts | 4 | Expert: the maximum number of attempts per map task. In other words, the framework will try to execute a map task this many times before giving up on it. |
| mapred.reduce.max.attempts | 4 | Expert: the maximum number of attempts per reduce task. In other words, the framework will try to execute a reduce task this many times before giving up on it. |
| mapred.reduce.parallel.copies | 5 | The default number of parallel transfers run by reduce during the copy (shuffle) phase. |
| mapred.task.timeout | 600000 | The number of milliseconds before a task will be terminated if it neither reads an input, writes an output, nor updates its status string. |
| mapred.tasktracker.tasks.maximum | 2 | The maximum number of tasks that will be run simultaneously by a task tracker. |
| mapred.child.java.opts | -Xmx200m | Java opts for the task tracker child processes. Subsumes 'mapred.child.heap.size' (if a mapred.child.heap.size value is found in a configuration, its maximum heap size will be used and a warning emitted that heap.size has been deprecated). Also, the following symbols, if present, will be interpolated: @taskid@ is replaced by the current TaskID, and @port@ is replaced by mapred.task.tracker.report.port + 1 (a second child will fail with a port-in-use error if mapred.tasktracker.tasks.maximum is greater than one). Any other occurrences of '@' go unchanged. For example, to enable verbose gc logging to a file named for the taskid in /tmp and to set the maximum heap to a gigabyte, pass a value of: -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc |
| mapred.inmem.merge.threshold | 1000 | The threshold, in terms of the number of files, for the in-memory merge process. When we accumulate this many files, we initiate the in-memory merge and spill to disk. A value of 0 or less means there is no threshold, and the merge is triggered only by the ramfs's memory consumption. |
| mapred.speculative.execution | false | If true, then multiple instances of some map and reduce tasks may be executed in parallel. |
| mapred.min.split.size | 0 | The minimum size chunk that map input should be split into. Note that some file formats may have minimum split sizes that take priority over this setting. |
| mapred.submit.replication | 10 | The replication level for submitted job files. This should be around the square root of the number of nodes. |
| mapred.tasktracker.dns.interface | default | The name of the network interface from which a task tracker should report its IP address. |
| mapred.tasktracker.dns.nameserver | default | The host name or IP address of the name server (DNS) which a TaskTracker should use to determine the host name used by the JobTracker for communication and display purposes. |
| tasktracker.http.threads | 40 | The number of worker threads for the http server. This is used for map output fetching. |
| tasktracker.http.bindAddress | 0.0.0.0 | The address to which the task tracker http server binds. |
| tasktracker.http.port | 50060 | The default port for task trackers to use as their http server. |
| keep.failed.task.files | false | Should the files for failed tasks be kept? This should only be used on jobs that are failing, because the storage is never reclaimed. It also prevents the map outputs from being erased from the reduce directory as they are consumed. |
| mapred.output.compress | false | Should the outputs of the reduces be compressed? |
| mapred.output.compression.codec | org.apache.hadoop.io.compress.DefaultCodec | If the reduce outputs are compressed, how should they be compressed? |
| mapred.compress.map.output | false | Should the outputs of the maps be compressed before being sent across the network? Uses SequenceFile compression. |
| io.seqfile.compress.blocksize | 1000000 | The minimum block size for compression in block-compressed SequenceFiles. |
| io.seqfile.lazydecompress | true | Should values of block-compressed SequenceFiles be decompressed only when necessary? |
| io.seqfile.sorter.recordlimit | 1000000 | The limit on the number of records to be kept in memory in a spill in SequenceFile.Sorter. |
| io.seqfile.compression.type | RECORD | The default compression type for SequenceFile.Writer. |
| map.sort.class | org.apache.hadoop.mapred.MergeSorter | The default sort class for sorting keys. |
| mapred.userlog.num.splits | 4 | The number of fragments into which the user-log is to be split. |
| mapred.userlog.limit.kb | 100 | The maximum size, in kilobytes, of the user-logs of each task. |
| mapred.userlog.purgesplits | true | Should the splits be purged, disregarding the user-log size limit? |
| mapred.userlog.retain.hours | 12 | The maximum time, in hours, for which the user-logs are to be retained. |
| mapred.hosts | | Names a file that contains the list of nodes that may connect to the jobtracker. If the value is empty, all hosts are permitted. |
| mapred.hosts.exclude | | Names a file that contains the list of hosts that should be excluded by the jobtracker. If the value is empty, no hosts are excluded. |
| mapred.max.tracker.failures | 4 | The number of task failures on a tasktracker of a given job after which new tasks of that job aren't assigned to it. |
| jobclient.output.filter | FAILED | The filter for controlling the output of the task's userlogs sent to the console of the JobClient. The permissible options are: NONE, FAILED, SUCCEEDED and ALL. |
| ipc.client.timeout | 60000 | Defines the timeout for IPC calls in milliseconds. |
| ipc.client.idlethreshold | 4000 | Defines the threshold number of connections after which connections will be inspected for idleness. |
| ipc.client.maxidletime | 120000 | Defines the maximum idle time for a connected client, after which it may be disconnected. |
| ipc.client.kill.max | 10 | Defines the maximum number of clients to disconnect in one go. |
| ipc.client.connection.maxidletime | 1000 | The maximum time after which a client will bring down the connection to the server. |
| ipc.client.connect.max.retries | 10 | Indicates the number of retries a client will make to establish a server connection. |
| ipc.server.listen.queue.size | 128 | Indicates the length of the listen queue for servers accepting client connections. |
| job.end.retry.attempts | 0 | Indicates how many times hadoop should attempt to contact the notification URL. |
| job.end.retry.interval | 30000 | Indicates the time, in milliseconds, between notification URL retry calls. |
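
These defaults come from hadoop-default.xml, which should not be edited directly; site-specific settings belong in hadoop-site.xml, where any property set there overrides the default above. As a minimal sketch of how such an override file looks (the host names and values below are illustrative assumptions, not recommendations), a small cluster might replace the local-mode defaults like this:

```xml
<?xml version="1.0"?>
<!-- hadoop-site.xml: site-specific overrides of hadoop-default.xml.
     The host names and values here are illustrative assumptions. -->
<configuration>

  <!-- Use HDFS rather than the file:/// default of fs.default.name. -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000/</value>
  </property>

  <!-- Run jobs on a cluster rather than in-process ("local"). -->
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker.example.com:9001</value>
  </property>

  <!-- Keep Hadoop's working state out of /tmp. -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hadoop/tmp</value>
  </property>

  <!-- A two-datanode test cluster cannot hold the default 3 replicas. -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>

</configuration>
```

Note that defaults such as dfs.name.dir, dfs.data.dir and mapred.local.dir are defined relative to ${hadoop.tmp.dir}, so overriding hadoop.tmp.dir alone relocates all of them unless they are set explicitly.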