Word count in Apache Pig#
Last modified: May 16, 2021 | YouTube
Test files#
Three test files are generated below to exercise the system. You can also create the files directly with operating-system commands in the terminal and the pico text editor.
[1]:
!rm -rf /tmp/wordcount
!mkdir -p /tmp/wordcount/input/
%cd /tmp/wordcount
/tmp/wordcount
[2]:
%%writefile input/text0.txt
Analytics is the discovery, interpretation, and communication of meaningful patterns
in data. Especially valuable in areas rich with recorded information, analytics relies
on the simultaneous application of statistics, computer programming and operations research
to quantify performance.
Organizations may apply analytics to business data to describe, predict, and improve business
performance. Specifically, areas within analytics include predictive analytics, prescriptive
analytics, enterprise decision management, descriptive analytics, cognitive analytics, Big
Data Analytics, retail analytics, store assortment and stock-keeping unit optimization,
marketing optimization and marketing mix modeling, web analytics, call analytics, speech
analytics, sales force sizing and optimization, price and promotion modeling, predictive
science, credit risk analysis, and fraud analytics. Since analytics can require extensive
computation (see big data), the algorithms and software used for analytics harness the most
current methods in computer science, statistics, and mathematics.
Writing input/text0.txt
[3]:
%%writefile input/text1.txt
The field of data analysis. Analytics often involves studying past historical data to
research potential trends, to analyze the effects of certain decisions or events, or to
evaluate the performance of a given tool or scenario. The goal of analytics is to improve
the business by gaining knowledge which can be used to make improvements or changes.
Writing input/text1.txt
[4]:
%%writefile input/text2.txt
Data analytics (DA) is the process of examining data sets in order to draw conclusions
about the information they contain, increasingly with the aid of specialized systems
and software. Data analytics technologies and techniques are widely used in commercial
industries to enable organizations to make more-informed business decisions and by
scientists and researchers to verify or disprove scientific models, theories and
hypotheses.
Writing input/text2.txt
[5]:
!ls -1 input/
text0.txt
text1.txt
text2.txt
Word count in local mode (writing and debugging the program)#
Note. Two hyphens (--) introduce a single-line comment, and /* … */ delimits a multi-line comment.
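Both styles can be combined in the same Pig Latin file; a minimal illustration (not part of the notebook's scripts):

/* This block comment describes
   the whole script. */
lines = LOAD 'input/text*.txt' AS (line:CHARARRAY); -- load the text files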
[6]:
%%writefile wordcount-local.pig
-- load the data from the local folder
lines = LOAD 'input/text*.txt' AS (line:CHARARRAY);
-- generate a relation called words with one word per record
words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
-- group the records that contain the same word
grouped = GROUP words BY word;
-- count the occurrences in each group
wordcount = FOREACH grouped GENERATE group, COUNT(words);
-- keep only 15 records (LIMIT does not guarantee any particular order)
s = LIMIT wordcount 15;
-- write the output file to the local file system
STORE s INTO 'output';
Writing wordcount-local.pig
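While writing and debugging the script in local mode, it is often convenient to inspect the intermediate relations. A minimal sketch (not part of the notebook) using Pig's standard DESCRIBE and DUMP operators:

-- print the schema inferred for an intermediate relation
DESCRIBE words;
-- materialize a relation and print it on screen instead of storing it
DUMP wordcount;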
[7]:
#
# Files in the local folder
#
!ls -l
total 8
drwxr-xr-x 2 root root 4096 Jun 3 14:53 input
-rw-r--r-- 1 root root 570 Jun 3 14:53 wordcount-local.pig
[8]:
#
# Execution in local mode (neither pseudo-distributed nor fully distributed (cluster))
#
!pig -x local -execute 'run wordcount-local.pig'
2022-06-03 14:53:40,434 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId=
2022-06-03 14:53:40,555 [JobControl] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:40,589 [JobControl] WARN org.apache.hadoop.mapreduce.JobResourceUploader - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2022-06-03 14:53:40,601 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input files to process : 3
2022-06-03 14:53:40,627 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2022-06-03 14:53:40,740 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_local1500906503_0001
2022-06-03 14:53:40,816 [JobControl] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://localhost:8080/
2022-06-03 14:53:40,817 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter set in config null
2022-06-03 14:53:40,834 [Thread-5] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2022-06-03 14:53:40,834 [Thread-5] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2022-06-03 14:53:40,835 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter is org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputCommitter
2022-06-03 14:53:40,860 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner - Waiting for map tasks
2022-06-03 14:53:40,860 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local1500906503_0001_m_000000_0
2022-06-03 14:53:40,884 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2022-06-03 14:53:40,885 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2022-06-03 14:53:40,896 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree : [ ]
2022-06-03 14:53:40,901 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Processing split: Number of splits :3
Total Length = 1885
Input split[0]:
Length = 1093
ClassName: org.apache.hadoop.mapreduce.lib.input.FileSplit
Locations:
-----------------------
Input split[1]:
Length = 440
ClassName: org.apache.hadoop.mapreduce.lib.input.FileSplit
Locations:
-----------------------
Input split[2]:
Length = 352
ClassName: org.apache.hadoop.mapreduce.lib.input.FileSplit
Locations:
-----------------------
2022-06-03 14:53:40,932 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - (EQUATOR) 0 kvi 26214396(104857584)
2022-06-03 14:53:40,932 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - mapreduce.task.io.sort.mb: 100
2022-06-03 14:53:40,932 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - soft limit at 83886080
2022-06-03 14:53:40,932 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufvoid = 104857600
2022-06-03 14:53:40,932 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396; length = 6553600
2022-06-03 14:53:40,936 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2022-06-03 14:53:40,973 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner -
2022-06-03 14:53:40,973 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Starting flush of map output
2022-06-03 14:53:40,973 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Spilling map output
2022-06-03 14:53:40,973 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufend = 3602; bufvoid = 104857600
2022-06-03 14:53:40,973 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396(104857584); kvend = 26213388(104853552); length = 1009/6553600
2022-06-03 14:53:41,011 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Finished spill 0
2022-06-03 14:53:41,014 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Task:attempt_local1500906503_0001_m_000000_0 is done. And is in the process of committing
2022-06-03 14:53:41,020 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - map
2022-06-03 14:53:41,021 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Task 'attempt_local1500906503_0001_m_000000_0' done.
2022-06-03 14:53:41,027 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Final Counters for attempt_local1500906503_0001_m_000000_0: Counters: 18
File System Counters
FILE: Number of bytes read=2415
FILE: Number of bytes written=509762
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Map-Reduce Framework
Map input records=24
Map output records=253
Map output bytes=3602
Map output materialized bytes=2687
Input split bytes=474
Combine input records=253
Combine output records=155
Spilled Records=155
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=0
Total committed heap usage (bytes)=371720192
File Input Format Counters
Bytes Read=0
2022-06-03 14:53:41,028 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local1500906503_0001_m_000000_0
2022-06-03 14:53:41,029 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
2022-06-03 14:53:41,031 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner - Waiting for reduce tasks
2022-06-03 14:53:41,032 [pool-4-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local1500906503_0001_r_000000_0
2022-06-03 14:53:41,044 [pool-4-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2022-06-03 14:53:41,044 [pool-4-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2022-06-03 14:53:41,046 [pool-4-thread-1] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree : [ ]
2022-06-03 14:53:41,050 [pool-4-thread-1] INFO org.apache.hadoop.mapred.ReduceTask - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@3294d701
2022-06-03 14:53:41,063 [pool-4-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - MergerManager: memoryLimit=652528832, maxSingleShuffleLimit=163132208, mergeThreshold=430669056, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2022-06-03 14:53:41,065 [EventFetcher for fetching Map Completion Events] INFO org.apache.hadoop.mapreduce.task.reduce.EventFetcher - attempt_local1500906503_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2022-06-03 14:53:41,082 [localfetcher#1] INFO org.apache.hadoop.mapreduce.task.reduce.LocalFetcher - localfetcher#1 about to shuffle output of map attempt_local1500906503_0001_m_000000_0 decomp: 2683 len: 2687 to MEMORY
2022-06-03 14:53:41,086 [localfetcher#1] INFO org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput - Read 2683 bytes from map-output for attempt_local1500906503_0001_m_000000_0
2022-06-03 14:53:41,087 [localfetcher#1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - closeInMemoryFile -> map-output of size: 2683, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->2683
2022-06-03 14:53:41,088 [EventFetcher for fetching Map Completion Events] INFO org.apache.hadoop.mapreduce.task.reduce.EventFetcher - EventFetcher is interrupted.. Returning
2022-06-03 14:53:41,088 [pool-4-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
2022-06-03 14:53:41,088 [pool-4-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2022-06-03 14:53:41,092 [pool-4-thread-1] INFO org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
2022-06-03 14:53:41,092 [pool-4-thread-1] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 2677 bytes
2022-06-03 14:53:41,094 [pool-4-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merged 1 segments, 2683 bytes to disk to satisfy reduce memory limit
2022-06-03 14:53:41,094 [pool-4-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merging 1 files, 2687 bytes from disk
2022-06-03 14:53:41,094 [pool-4-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merging 0 segments, 0 bytes from memory into reduce
2022-06-03 14:53:41,095 [pool-4-thread-1] INFO org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
2022-06-03 14:53:41,095 [pool-4-thread-1] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 2677 bytes
2022-06-03 14:53:41,095 [pool-4-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
2022-06-03 14:53:41,099 [pool-4-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2022-06-03 14:53:41,099 [pool-4-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2022-06-03 14:53:41,111 [pool-4-thread-1] INFO org.apache.hadoop.mapred.Task - Task:attempt_local1500906503_0001_r_000000_0 is done. And is in the process of committing
2022-06-03 14:53:41,114 [pool-4-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
2022-06-03 14:53:41,114 [pool-4-thread-1] INFO org.apache.hadoop.mapred.Task - Task attempt_local1500906503_0001_r_000000_0 is allowed to commit now
2022-06-03 14:53:41,116 [pool-4-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_local1500906503_0001_r_000000_0' to file:/tmp/temp-103408550/tmp1846501450/_temporary/0/task_local1500906503_0001_r_000000
2022-06-03 14:53:41,117 [pool-4-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce
2022-06-03 14:53:41,117 [pool-4-thread-1] INFO org.apache.hadoop.mapred.Task - Task 'attempt_local1500906503_0001_r_000000_0' done.
2022-06-03 14:53:41,117 [pool-4-thread-1] INFO org.apache.hadoop.mapred.Task - Final Counters for attempt_local1500906503_0001_r_000000_0: Counters: 24
File System Counters
FILE: Number of bytes read=7821
FILE: Number of bytes written=512668
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Map-Reduce Framework
Combine input records=0
Combine output records=0
Reduce input groups=155
Reduce shuffle bytes=2687
Reduce input records=155
Reduce output records=15
Spilled Records=155
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=0
Total committed heap usage (bytes)=371720192
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Output Format Counters
Bytes Written=0
2022-06-03 14:53:41,117 [pool-4-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local1500906503_0001_r_000000_0
2022-06-03 14:53:41,117 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner - reduce task executor complete.
2022-06-03 14:53:41,327 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:41,336 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:41,337 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:41,365 [JobControl] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:41,369 [JobControl] WARN org.apache.hadoop.mapreduce.JobResourceUploader - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2022-06-03 14:53:41,377 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input files to process : 1
2022-06-03 14:53:41,379 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2022-06-03 14:53:41,389 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_local511516572_0002
2022-06-03 14:53:41,442 [JobControl] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://localhost:8080/
2022-06-03 14:53:41,442 [Thread-16] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter set in config null
2022-06-03 14:53:41,446 [Thread-16] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2022-06-03 14:53:41,446 [Thread-16] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2022-06-03 14:53:41,446 [Thread-16] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter is org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputCommitter
2022-06-03 14:53:41,449 [Thread-16] INFO org.apache.hadoop.mapred.LocalJobRunner - Waiting for map tasks
2022-06-03 14:53:41,449 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local511516572_0002_m_000000_0
2022-06-03 14:53:41,453 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2022-06-03 14:53:41,453 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2022-06-03 14:53:41,454 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree : [ ]
2022-06-03 14:53:41,455 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Processing split: Number of splits :1
Total Length = 207
Input split[0]:
Length = 207
ClassName: org.apache.hadoop.mapreduce.lib.input.FileSplit
Locations:
-----------------------
2022-06-03 14:53:41,463 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - (EQUATOR) 0 kvi 26214396(104857584)
2022-06-03 14:53:41,463 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - mapreduce.task.io.sort.mb: 100
2022-06-03 14:53:41,463 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - soft limit at 83886080
2022-06-03 14:53:41,463 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufvoid = 104857600
2022-06-03 14:53:41,463 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396; length = 6553600
2022-06-03 14:53:41,464 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2022-06-03 14:53:41,468 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner -
2022-06-03 14:53:41,468 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Starting flush of map output
2022-06-03 14:53:41,468 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Spilling map output
2022-06-03 14:53:41,468 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufend = 237; bufvoid = 104857600
2022-06-03 14:53:41,468 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396(104857584); kvend = 26214340(104857360); length = 57/6553600
2022-06-03 14:53:41,469 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Finished spill 0
2022-06-03 14:53:41,469 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Task:attempt_local511516572_0002_m_000000_0 is done. And is in the process of committing
2022-06-03 14:53:41,470 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - map
2022-06-03 14:53:41,470 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Task 'attempt_local511516572_0002_m_000000_0' done.
2022-06-03 14:53:41,471 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Final Counters for attempt_local511516572_0002_m_000000_0: Counters: 17
File System Counters
FILE: Number of bytes read=8477
FILE: Number of bytes written=1006753
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Map-Reduce Framework
Map input records=15
Map output records=15
Map output bytes=237
Map output materialized bytes=273
Input split bytes=378
Combine input records=0
Spilled Records=15
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=0
Total committed heap usage (bytes)=371720192
File Input Format Counters
Bytes Read=0
2022-06-03 14:53:41,471 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local511516572_0002_m_000000_0
2022-06-03 14:53:41,471 [Thread-16] INFO org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
2022-06-03 14:53:41,472 [Thread-16] INFO org.apache.hadoop.mapred.LocalJobRunner - Waiting for reduce tasks
2022-06-03 14:53:41,472 [pool-9-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local511516572_0002_r_000000_0
2022-06-03 14:53:41,476 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2022-06-03 14:53:41,476 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2022-06-03 14:53:41,478 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree : [ ]
2022-06-03 14:53:41,478 [pool-9-thread-1] INFO org.apache.hadoop.mapred.ReduceTask - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@246e8222
2022-06-03 14:53:41,478 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - MergerManager: memoryLimit=652528832, maxSingleShuffleLimit=163132208, mergeThreshold=430669056, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2022-06-03 14:53:41,479 [EventFetcher for fetching Map Completion Events] INFO org.apache.hadoop.mapreduce.task.reduce.EventFetcher - attempt_local511516572_0002_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2022-06-03 14:53:41,480 [localfetcher#2] INFO org.apache.hadoop.mapreduce.task.reduce.LocalFetcher - localfetcher#2 about to shuffle output of map attempt_local511516572_0002_m_000000_0 decomp: 269 len: 273 to MEMORY
2022-06-03 14:53:41,480 [localfetcher#2] INFO org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput - Read 269 bytes from map-output for attempt_local511516572_0002_m_000000_0
2022-06-03 14:53:41,480 [localfetcher#2] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - closeInMemoryFile -> map-output of size: 269, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->269
2022-06-03 14:53:41,481 [EventFetcher for fetching Map Completion Events] INFO org.apache.hadoop.mapreduce.task.reduce.EventFetcher - EventFetcher is interrupted.. Returning
2022-06-03 14:53:41,481 [pool-9-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
2022-06-03 14:53:41,481 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2022-06-03 14:53:41,482 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
2022-06-03 14:53:41,482 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 256 bytes
2022-06-03 14:53:41,483 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merged 1 segments, 269 bytes to disk to satisfy reduce memory limit
2022-06-03 14:53:41,483 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merging 1 files, 273 bytes from disk
2022-06-03 14:53:41,483 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merging 0 segments, 0 bytes from memory into reduce
2022-06-03 14:53:41,483 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
2022-06-03 14:53:41,484 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 256 bytes
2022-06-03 14:53:41,484 [pool-9-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
2022-06-03 14:53:41,485 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2022-06-03 14:53:41,485 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2022-06-03 14:53:41,490 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Task - Task:attempt_local511516572_0002_r_000000_0 is done. And is in the process of committing
2022-06-03 14:53:41,492 [pool-9-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
2022-06-03 14:53:41,492 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Task - Task attempt_local511516572_0002_r_000000_0 is allowed to commit now
2022-06-03 14:53:41,494 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_local511516572_0002_r_000000_0' to file:/tmp/wordcount/output/_temporary/0/task_local511516572_0002_r_000000
2022-06-03 14:53:41,494 [pool-9-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce
2022-06-03 14:53:41,494 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Task - Task 'attempt_local511516572_0002_r_000000_0' done.
2022-06-03 14:53:41,494 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Task - Final Counters for attempt_local511516572_0002_r_000000_0: Counters: 24
File System Counters
FILE: Number of bytes read=9055
FILE: Number of bytes written=1007119
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Map-Reduce Framework
Combine input records=0
Combine output records=0
Reduce input groups=15
Reduce shuffle bytes=273
Reduce input records=15
Reduce output records=15
Spilled Records=15
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=0
Total committed heap usage (bytes)=371720192
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Output Format Counters
Bytes Written=0
2022-06-03 14:53:41,494 [pool-9-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local511516572_0002_r_000000_0
2022-06-03 14:53:41,494 [Thread-16] INFO org.apache.hadoop.mapred.LocalJobRunner - reduce task executor complete.
2022-06-03 14:53:41,644 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:41,645 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:41,646 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:41,652 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:41,652 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:41,653 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:41,658 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:41,659 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2022-06-03 14:53:41,660 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
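The script can also be launched without the -execute option by passing the file name directly to pig; as a hedged sketch of the usual invocations:

pig -x local wordcount-local.pig      # local mode, files on the local file system
pig -x mapreduce wordcount-local.pig  # MapReduce mode (the default), files on HDFS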
[9]:
#
# Files in the local folder
#
!ls -l
total 12
drwxr-xr-x 2 root root 4096 Jun 3 14:53 input
drwxr-xr-x 2 root root 4096 Jun 3 14:53 output
-rw-r--r-- 1 root root 570 Jun 3 14:53 wordcount-local.pig
[10]:
#
# Results obtained
#
!ls -l output/
total 4
-rw-r--r-- 1 root root 0 Jun 3 14:53 _SUCCESS
-rw-r--r-- 1 root root 81 Jun 3 14:53 part-r-00000
[11]:
#
# Contents of part-r-*
#
!cat output/part-r-*
a 1
DA 1
be 1
by 2
in 5
is 3
of 8
on 1
or 5
to 12
Big 1
The 2
aid 1
and 15
are 1
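Note that LIMIT simply keeps 15 records without any particular order. To obtain the most frequent words, the counts can be sorted first; a minimal variation (not part of the original notebook; the output folder name output-top15 is only illustrative):

-- sort the counts from most to least frequent before limiting
sorted = ORDER wordcount BY $1 DESC;
top15 = LIMIT sorted 15;
STORE top15 INTO 'output-top15';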
Word count in pseudo-distributed mode (cluster)#
[12]:
%%writefile wordcount-pseudo.pig
-- delete the HDFS folders if they exist
fs -rm -r input output
-- create the input folder in HDFS
fs -mkdir input
-- copy the files from the local file system to HDFS
fs -put input/ .
-- load the data
lines = LOAD 'input/text*.txt' AS (line:CHARARRAY);
-- generate a relation called words with one word per record
words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
-- group the records that contain the same word
grouped = GROUP words BY word;
-- count the occurrences in each group
wordcount = FOREACH grouped GENERATE group, COUNT(words);
-- keep only 15 records (LIMIT does not guarantee any particular order)
s = LIMIT wordcount 15;
-- write the output file to HDFS
STORE s INTO 'output';
-- copy the output from HDFS to the local file system
fs -get output/
Writing wordcount-pseudo.pig
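After the fs -put step has copied input/ into HDFS, the distributed file system can be inspected with the usual commands; a quick check from the terminal (not part of the original notebook) could be:

hdfs dfs -ls input/
hdfs dfs -cat input/text0.txt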
[13]:
#
# Execution in pseudo-distributed mode (cluster)
#
!rm -rf output/
!pig -execute 'run wordcount-pseudo.pig'
Deleted input
Deleted output
2022-06-03 14:53:45,437 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:53:45,696 [JobControl] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:53:45,764 [JobControl] WARN org.apache.hadoop.mapreduce.JobResourceUploader - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2022-06-03 14:53:45,780 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input files to process : 3
2022-06-03 14:53:45,823 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2022-06-03 14:53:45,974 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1654265746122_0005
2022-06-03 14:53:46,066 [JobControl] INFO org.apache.hadoop.mapred.YARNRunner - Job jar is not present. Not adding any jar to the list of resources.
2022-06-03 14:53:46,113 [JobControl] INFO org.apache.hadoop.conf.Configuration - resource-types.xml not found
2022-06-03 14:53:46,114 [JobControl] INFO org.apache.hadoop.yarn.util.resource.ResourceUtils - Unable to find 'resource-types.xml'.
2022-06-03 14:53:46,117 [JobControl] INFO org.apache.hadoop.yarn.util.resource.ResourceUtils - Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
2022-06-03 14:53:46,117 [JobControl] INFO org.apache.hadoop.yarn.util.resource.ResourceUtils - Adding resource type - name = vcores, units = , type = COUNTABLE
2022-06-03 14:53:46,156 [JobControl] INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1654265746122_0005
2022-06-03 14:53:46,181 [JobControl] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://3dace13b7f0d:8088/proxy/application_1654265746122_0005/
2022-06-03 14:54:01,316 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:01,323 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2022-06-03 14:54:01,413 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:01,418 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2022-06-03 14:54:01,440 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:01,444 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2022-06-03 14:54:01,589 [JobControl] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:01,599 [JobControl] WARN org.apache.hadoop.mapreduce.JobResourceUploader - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2022-06-03 14:54:01,611 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input files to process : 1
2022-06-03 14:54:01,633 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2022-06-03 14:54:01,664 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1654265746122_0006
2022-06-03 14:54:01,667 [JobControl] INFO org.apache.hadoop.mapred.YARNRunner - Job jar is not present. Not adding any jar to the list of resources.
2022-06-03 14:54:01,692 [JobControl] INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1654265746122_0006
2022-06-03 14:54:01,697 [JobControl] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://3dace13b7f0d:8088/proxy/application_1654265746122_0006/
2022-06-03 14:54:16,711 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:16,714 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2022-06-03 14:54:16,759 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:16,763 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2022-06-03 14:54:16,781 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:16,784 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2022-06-03 14:54:16,806 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:16,808 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2022-06-03 14:54:16,824 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:16,827 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2022-06-03 14:54:16,842 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:16,844 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2022-06-03 14:54:16,861 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:16,864 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2022-06-03 14:54:16,878 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:16,880 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2022-06-03 14:54:16,894 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2022-06-03 14:54:16,897 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
[14]:
#
# Contents of HDFS
#
!hdfs dfs -ls output/*
-rw-r--r-- 1 root supergroup 0 2022-06-03 14:54 output/_SUCCESS
-rw-r--r-- 1 root supergroup 81 2022-06-03 14:54 output/part-r-00000
[15]:
#
# Results obtained in HDFS
#
!hdfs dfs -cat output/part-r-00000
a 1
DA 1
be 1
by 2
in 5
is 3
of 8
on 1
or 5
to 12
Big 1
The 2
aid 1
and 15
are 1
[16]:
#
# Results obtained on the local machine
#
!ls -l
total 16
drwxr-xr-x 2 root root 4096 Jun 3 14:53 input
drwxr-xr-x 2 root root 4096 Jun 3 14:54 output
-rw-r--r-- 1 root root 570 Jun 3 14:53 wordcount-local.pig
-rw-r--r-- 1 root root 780 Jun 3 14:53 wordcount-pseudo.pig
[17]:
!ls -l output/
total 4
-rw-r--r-- 1 root root 0 Jun 3 14:54 _SUCCESS
-rw-r--r-- 1 root root 81 Jun 3 14:54 part-r-00000
[18]:
#
# Contents of part-r-*
#
!cat output/part-r-*
a 1
DA 1
be 1
by 2
in 5
is 3
of 8
on 1
or 5
to 12
Big 1
The 2
aid 1
and 15
are 1
Running scripts from Grunt (the Apache Pig console)#
Scripts are executed with the exec and run commands.
grunt> exec script
grunt> run script
The difference between these commands is that exec runs the script without importing it into the grunt session (its aliases are not visible afterwards), while run does import it.
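A minimal sketch of the difference, using the wordcount-local.pig script from this notebook (assuming grunt was started in the same folder and the output directory has been removed beforehand):

grunt> exec wordcount-local.pig
grunt> DUMP wordcount;    -- fails: exec does not leave the script's aliases in the session
grunt> run wordcount-local.pig
grunt> DUMP wordcount;    -- works: run imports the script's aliases into the session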