Once again, my question is: how can I extract a zipped CSV file directly from a website?
My attempt:
I am following this tutorial exactly and copying the code into my JupyterLab:
https://bodo-schoenfeld.de/csv-daten-mi ... net-laden/
First I import numpy, pandas (as pd), matplotlib.pyplot (as plt), requests, io and zipfile.
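For reference, the import cell looks like this (exactly the modules listed above):

import io
import zipfile

import numpy
import pandas as pd
import matplotlib.pyplot as plt
import requests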
Then:
CSV_URL = "https://www.covid19.admin.ch/api/data/2 ... es-csv.zip"
csv_data = requests.get(CSV_URL).content
Here I try to add the ZIP handling and get an error message:
with zipfile (csv_data, "r") as zip:
    zip.printdir()
    zip.extractall()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-72844f47cec3> in <module>
----> 1 with zipfile (csv_data, "r") as zip:
2 zip.printdir()
3 zip.extractall()
4 df = pd.read_csv(io.StringIO(csv_data.decode("latin1")), sep=";")
TypeError: 'module' object is not callable
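My guess is that this TypeError comes from calling the zipfile module itself instead of the zipfile.ZipFile class, and that the raw bytes first have to be wrapped in a file-like object. If that is right, the with statement would presumably have to look roughly like this (untested guess on my part):

# Assumption: wrap the downloaded bytes in io.BytesIO and open them with zipfile.ZipFile
with zipfile.ZipFile(io.BytesIO(csv_data), "r") as zf:
    zf.printdir()      # list the archive members
    zf.extractall()    # extract into the current working directory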
After that I try to read the file anyway and get another error message:
df = pd.read_csv(io.StringIO(csv_data.decode("latin1")), sep=";")
---------------------------------------------------------------------------
ParserError Traceback (most recent call last)
<ipython-input-14-98be56f8bb14> in <module>
----> 1 df = pd.read_csv(io.StringIO(csv_data.decode("latin1")), sep=";")
~\anaconda3\lib\site-packages\pandas\io\parsers.py in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
684 )
685
--> 686 return _read(filepath_or_buffer, kwds)
687
688
~\anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
456
457 try:
--> 458 data = parser.read(nrows)
459 finally:
460 parser.close()
~\anaconda3\lib\site-packages\pandas\io\parsers.py in read(self, nrows)
1194 def read(self, nrows=None):
1195 nrows = _validate_integer("nrows", nrows)
-> 1196 ret = self._engine.read(nrows)
1197
1198 # May alter columns / col_dict
~\anaconda3\lib\site-packages\pandas\io\parsers.py in read(self, nrows)
2153 def read(self, nrows=None):
2154 try:
-> 2155 data = self._reader.read(nrows)
2156 except StopIteration:
2157 if self._first_chunk:
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.read()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_rows()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()
pandas\_libs\parsers.pyx in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 5 fields in line 160, saw 6
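Putting it together, what I think the tutorial is aiming at is roughly the following. This is only a sketch of my understanding; I am assuming the archive contains one or more semicolon-separated, Latin-1 encoded CSV files (as in my read_csv call above), and that reading the first archive member is what is intended:

import io
import zipfile

import pandas as pd
import requests

# The full sources-...-csv.zip URL from above (shortened here)
CSV_URL = "https://www.covid19.admin.ch/api/data/2 ... es-csv.zip"

# Download the ZIP archive into memory
csv_data = requests.get(CSV_URL).content

# Wrap the raw bytes in a file-like object and open the archive
with zipfile.ZipFile(io.BytesIO(csv_data), "r") as zf:
    zf.printdir()                      # list the archive members
    first_member = zf.namelist()[0]    # assumption: take the first CSV in the archive
    with zf.open(first_member) as f:
        # Same read_csv call as above, applied to the extracted member
        df = pd.read_csv(io.StringIO(f.read().decode("latin1")), sep=";")

print(df.head())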