Word counting with map() and Counter()#
Last modified: May 14, 2022
Adapted from the book “Mastering Large Datasets with Python”.
Test files#
[1]:
!rm -rf /tmp/input /tmp/output
!mkdir /tmp/input
[2]:
%%writefile /tmp/input/text0.txt
Analytics is the discovery, interpretation, and communication of meaningful patterns
in data. Especially valuable in areas rich with recorded information, analytics relies
on the simultaneous application of statistics, computer programming and operations research
to quantify performance.
Organizations may apply analytics to business data to describe, predict, and improve business
performance. Specifically, areas within analytics include predictive analytics, prescriptive
analytics, enterprise decision management, descriptive analytics, cognitive analytics, Big
Data Analytics, retail analytics, store assortment and stock-keeping unit optimization,
marketing optimization and marketing mix modeling, web analytics, call analytics, speech
analytics, sales force sizing and optimization, price and promotion modeling, predictive
science, credit risk analysis, and fraud analytics. Since analytics can require extensive
computation (see big data), the algorithms and software used for analytics harness the most
current methods in computer science, statistics, and mathematics.
The field of data analysis. Analytics often involves studying past historical data to
research potential trends, to analyze the effects of certain decisions or events, or to
evaluate the performance of a given tool or scenario. The goal of analytics is to improve
the business by gaining knowledge which can be used to make improvements or changes.
Data analytics (DA) is the process of examining data sets in order to draw conclusions
about the information they contain, increasingly with the aid of specialized systems
and software. Data analytics technologies and techniques are widely used in commercial
industries to enable organizations to make more-informed business decisions and by
scientists and researchers to verify or disprove scientific models, theories and
hypotheses.
Writing /tmp/input/text0.txt
[3]:
import shutil
# Make 9,999 copies of the sample file, for 10,000 input files in total
for i in range(1, 10000):
    shutil.copy("/tmp/input/text0.txt", f"/tmp/input/text{i}.txt")
Reading the files line by line#
[4]:
import fileinput
import glob
import os
def load_data(file_path):
    # -----------------------------------------------------------------------------------
    def make_iterator_from_single_file(file_path):
        with open(file_path, "rt") as file:
            for line in file:
                yield line

    # -----------------------------------------------------------------------------------
    def make_iterator_from_multiple_files(file_path):
        file_path = os.path.join(file_path, "*")
        files = glob.glob(file_path)
        with fileinput.input(files=files) as file:
            for line in file:
                yield line

    # -----------------------------------------------------------------------------------
    if os.path.isfile(file_path):
        return make_iterator_from_single_file(file_path)
    return make_iterator_from_multiple_files(file_path)
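Because load_data returns a generator, lines are produced lazily: nothing is read until the consumer asks for it. A minimal stdlib sketch of the same idea (the lines helper and the temporary file are illustrative, not part of the notebook's pipeline):

```python
import tempfile
from itertools import islice

def lines(path):
    # Generator: yields one line at a time without loading the file into memory.
    with open(path, "rt") as f:
        for line in f:
            yield line

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("first\nsecond\nthird\n")
    path = tmp.name

# islice pulls only the first two lines; the rest of the file is never read.
first_two = list(islice(lines(path), 2))
```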
Preprocessing functions#
[5]:
import string
def tolower(x):
    return x.lower()


def remove_punctuation(x):
    return x.translate(str.maketrans("", "", string.punctuation))


def remove_newline(x):
    return x.replace("\n", "")


def split_lines(x):
    return x.split()
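remove_punctuation relies on str.translate with a deletion table: str.maketrans with two empty strings and a third argument builds a table that maps every character in the third argument to None. A quick stdlib check:

```python
import string

# Third argument of maketrans: characters to delete from the string.
table = str.maketrans("", "", string.punctuation)
cleaned = "Hello, world!".translate(table)
# cleaned == "Hello world"
```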
Chaining map() functions#
[6]:
from toolz.itertoolz import concat
result = map(
    tolower,
    map(
        remove_punctuation,
        map(
            remove_newline,
            concat(
                map(
                    split_lines,
                    load_data("/tmp/input/"),
                )
            ),
        ),
    ),
)
list(result)[:5]
[6]:
['analytics', 'is', 'the', 'discovery', 'interpretation']
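concat flattens one level of nesting lazily, turning the per-line word lists into a single stream of words. The stdlib equivalent is itertools.chain.from_iterable, which toolz's concat wraps:

```python
from itertools import chain

# Flattens one level lazily: no intermediate list is built.
nested = [["analytics", "is"], ["the", "discovery"]]
flat = chain.from_iterable(nested)
words = list(flat)
# words == ["analytics", "is", "the", "discovery"]
```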
Multiprocessing#
[7]:
from multiprocessing import Pool
with Pool() as pool:
    result = pool.map(split_lines, load_data("/tmp/input/"))
    result = concat(result)
    result = pool.map(remove_newline, result)
    result = pool.map(remove_punctuation, result)
    result = pool.map(tolower, result)
result[:5]
[7]:
['analytics', 'is', 'the', 'discovery', 'interpretation']
Function pipelines with compose()#
[8]:
from toolz.functoolz import compose
compose_pipeline = compose(
    remove_punctuation,
    tolower,
    remove_newline,
)

with Pool() as pool:
    result = pool.map(split_lines, load_data("/tmp/input/"))
    result = concat(result)
    result = pool.map(compose_pipeline, result)
result[:5]
[8]:
['analytics', 'is', 'the', 'discovery', 'interpretation']
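compose applies its arguments from right to left: the last function listed runs first. Here that means remove_newline, then tolower, then remove_punctuation; the result matches the nested map() version because lowercasing and punctuation removal commute. A stdlib sketch of the two-function case (compose2 is a hypothetical helper, not part of toolz):

```python
def add1(x):
    return x + 1

def double(x):
    return x * 2

def compose2(f, g):
    # Right-to-left application, like toolz's compose: f(g(x)).
    return lambda x: f(g(x))

f = compose2(double, add1)
# f(3): add1 runs first (3 -> 4), then double (4 -> 8)
```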
Timing comparison#
[9]:
%%timeit
result = map(
    tolower,
    map(
        remove_punctuation,
        map(
            remove_newline,
            concat(
                map(
                    split_lines,
                    load_data("/tmp/input/"),
                )
            ),
        ),
    ),
)
result = list(result)
4.6 s ± 94.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
[10]:
%%timeit
with Pool() as pool:
    result = pool.map(split_lines, load_data("/tmp/input/"))
    result = concat(result)
    result = pool.map(remove_newline, result)
    result = pool.map(remove_punctuation, result)
    result = pool.map(tolower, result)
3.56 s ± 372 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
[11]:
%%timeit
with Pool() as pool:
    result = pool.map(split_lines, load_data("/tmp/input/"))
    result = concat(result)
    result = pool.map(compose_pipeline, result)
2.19 s ± 100 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Word count#
[12]:
from collections import Counter
result = map(
tolower,
map(
remove_punctuation,
map(
remove_newline,
concat(
map(
split_lines,
load_data("/tmp/input/"),
)
),
),
),
)
Counter(result).most_common(10)
[12]:
[('analytics', 200000),
('and', 150000),
('the', 120000),
('to', 120000),
('data', 90000),
('of', 80000),
('in', 50000),
('or', 50000),
('business', 40000),
('is', 30000)]
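Counter consumes any iterable of hashable items and tallies them; most_common(n) returns the n most frequent items as (item, count) pairs, sorted by descending count. A minimal illustration, independent of the corpus above:

```python
from collections import Counter

# Tally items from an iterable; Counter is a dict subclass.
counts = Counter(["a", "b", "a", "c", "a", "b"])
top = counts.most_common(2)
# top == [("a", 3), ("b", 2)]
```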