Date parsing code for "Jul 07, 2019" in pandas, Python


You need to change the format you specified in the dateparse:

from datetime import datetime

# pd.datetime is deprecated; use the standard library datetime class
dateparse = lambda dates: datetime.strptime(dates, '%b %d, %Y')
data = pd.read_csv('C:\\doc.csv', parse_dates=['date'], index_col='date', date_parser=dateparse)

For example:

>>> datetime.strptime('Jul 07, 2018', '%b %d, %Y')
datetime.datetime(2018, 7, 7, 0, 0)
>>> datetime.strptime('Apr 07, 2018', '%b %d, %Y')
datetime.datetime(2018, 4, 7, 0, 0)
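
Note that pd.datetime has been deprecated and removed in newer pandas releases. If you are on pandas 2.0 or later you can drop the lambda entirely and pass the format straight to read_csv via date_format; a minimal sketch, assuming the same doc.csv layout as above:

import pandas as pd

# pandas >= 2.0: date_format replaces the deprecated date_parser argument
data = pd.read_csv('C:\\doc.csv', parse_dates=['date'], index_col='date',
                   date_format='%b %d, %Y')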

Suggestion : 2

I've been trying to parse dates of the form Jul 07, 2018 into dd-mm-yyyy format for my financial time series project, but being new to Pandas I am not able to do it the usual way.

I've tried:

dateparse = lambda dates: pd.datetime.strptime(dates, '%m/%d/%Y')
data = pd.read_csv('C:\\doc.csv', parse_dates = ['date'], index_col = 'date', date_parser = dateparse)

Error is shown as:

ValueError: time data 'Jul 07, 2019' does not match format '%m/%d/%Y'

You need to change the format you specified in the dateparse. In short, the format is %b %d, %Y:

from datetime import datetime

# pd.datetime is deprecated; use the standard library datetime class
dateparse = lambda dates: datetime.strptime(dates, '%b %d, %Y')
data = pd.read_csv('C:\\doc.csv', parse_dates=['date'], index_col='date', date_parser=dateparse)

For example:

>>> datetime.strptime('Jul 07, 2018', '%b %d, %Y')
datetime.datetime(2018, 7, 7, 0, 0)
>>> datetime.strptime('Apr 07, 2018', '%b %d, %Y')
datetime.datetime(2018, 4, 7, 0, 0)
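
Since the question actually wants dd-mm-yyyy output, remember that strptime only controls parsing; once the dates are a DatetimeIndex you can render them back out in any format. A small sketch, assuming the 'date' index built above:

# Format the parsed DatetimeIndex as dd-mm-yyyy strings
data['date_str'] = data.index.strftime('%d-%m-%Y')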

Suggestion : 3


You can try:

import pandas as pd

def custom_date_parser(x):
    return pd.to_datetime(x, format='%Y-%m-%d %H:%M:%S.%f', errors='coerce')

# Finally:
df = pd.read_csv('abc.csv', parse_dates=['endTime'], date_parser=custom_date_parser)
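
Because errors='coerce' is used, any value that doesn't match the format silently becomes NaT instead of raising, so it's worth checking for those afterwards. A small sketch, assuming the same endTime column:

# Rows whose endTime could not be parsed are NaT after the coerce
bad_rows = df[df['endTime'].isna()]
print(len(bad_rows), 'rows failed to parse')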

Or don't use date_parser at all and let pandas infer the format:

df = pd.read_csv('abc.csv', parse_dates = ['endTime'])

By default it adds .000. What is the exact error you are seeing?

import pandas as pd

df = pd.DataFrame({
    'date': ['2016-6-10 09:40:22.668',
             '2016-7-1 19:45:30.532',
             '2013-10-12 4:5:1'],
    'value': [2, 3, 4]
})
df['date'] = pd.to_datetime(df['date'], format="%Y-%m-%d %H:%M:%S.%f")
print(df)

Output:

                     date  value
0 2016-06-10 09:40:22.668      2
1 2016-07-01 19:45:30.532      3
2 2013-10-12 04:05:01.000      4

Suggestion : 4

I am looking for an efficient way to remove unwanted parts from strings in a DataFrame column. To remove unwanted parts from strings in a column with pandas, we can use the map method.


    time  result
1  09:00    +52A
2  10:00    +62B
3  11:00    +44a
4  12:00    +30b
5  13:00   -110a

data['result'] = data['result'].map(lambda x: x.lstrip('+-').rstrip('aAbBcC'))
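
If the column is large, the vectorized .str accessor does the same stripping without a Python-level lambda; a sketch assuming the same DataFrame as above:

# Same cleanup using pandas' vectorized string methods
data['result'] = data['result'].str.lstrip('+-').str.rstrip('aAbBcC')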

Suggestion : 5

Here's a working code example: polidistmap.py, and here's an example of a working map. Moving or zooming the map is very, very slow: it relies on OpenGL support in the browser, which doesn't work well on Linux in general, or on a lot of graphics cards on any platform.

But Folium can't handle shapefiles, only GeoJSON. You can do that translation with a GDAL command:

ogr2ogr -t_srs EPSG:4326 -f GeoJSON file.json file.shp

Or you can do it programmatically with the GDAL Python bindings:

import gdal

def shapefile2geojson(infile, outfile, fieldname):
    '''Translate a shapefile to GeoJSON.'''
    options = gdal.VectorTranslateOptions(format="GeoJSON",
                                          dstSRS="EPSG:4326")
    gdal.VectorTranslate(outfile, infile, options=options)
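
Called, for instance, like this (the file and field names here are only placeholders):

# Convert a hypothetical districts shapefile to WGS84 GeoJSON
shapefile2geojson('districts.shp', 'districts.json', 'NAME')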

You can color the regions with a style function:

folium.GeoJson(jsonfile, style_function=style_fcn).add_to(m)
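
Here style_fcn is a function you define yourself; Folium calls it once per GeoJSON feature and expects a dict of Leaflet style properties back. A minimal sketch might look like:

def style_fcn(feature):
    # Return Leaflet path options for this feature
    return {
        'fillColor': '#b0c4de',
        'color': '#444444',   # outline
        'weight': 1,
        'fillOpacity': 0.5,
    }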

I wanted to let the user choose regions by clicking, but it turns out Folium doesn't have much support for that (it may be coming in a future release). You can do it by reading the GeoJSON yourself, splitting it into separate polygons and making them all separate Folium Polygons or GeoJSON objects, each with its own click behavior; but if you don't mind highlights and popups on mouseover instead of requiring a click, that's pretty easy. For highlighting in red whenever the user mouses over a polygon, set this highlight_function:

def highlight_fcn(x):
    return {'fillColor': '#ff0000'}

For tooltips:

tooltip = folium.GeoJsonTooltip(fields = ['NAME'])
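
Wired together, all of these hang off the same GeoJson layer (a sketch using the names defined above):

folium.GeoJson(jsonfile,
               style_function=style_fcn,
               highlight_function=highlight_fcn,
               tooltip=tooltip).add_to(m)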

But a few days after I'd gotten it all working, I realized none of it was needed for this project, because ... ta-DA — povray accepts this argument inside its camera section:

    angle 360

You do need to change povray's projection to cylindrical; the default is "perspective" which warps the images. If you set your look_at to point due south -- the first and second coordinates are the same as your observer coordinate, the third being zero so it's looking southward -- then povray will create a lovely strip starting at 0 degrees bearing (due north), and with south right in the middle. The camera section I ended up with was:

camera {
   cylinder 1

   location < 0.344444, 0.029620, 0.519048 >
   look_at < 0.344444, 0.029620, 0 >

   angle 360
}

Obviously, this wasn't something I wanted to calculate by hand, so I wrote a script for it: demproj.py. Run it with the name of a DEM file and the observer's coordinates:

demproj.py demfile.png 35.827 -106.1803

It renders a view every 45 degrees (outfile000.png through outfile315.png), which you can then composite into a single panorama strip with ImageMagick:
convert -size 3600x600 xc:black \
    outfile000.png -geometry +0+0 -composite \
    outfile045.png -geometry +400+0 -composite \
    outfile090.png -geometry +800+0 -composite \
    outfile135.png -geometry +1200+0 -composite \
    outfile180.png -geometry +1600+0 -composite \
    outfile225.png -geometry +2000+0 -composite \
    outfile270.png -geometry +2400+0 -composite \
    outfile315.png -geometry +2800+0 -composite \
    out-composite.png
convert outfile*.png +smush -400 out-smush.png

If you get the offsets perfect and want to know what they are so you can use them in ImageMagick or another program, use GIMP's Filters->Python-Fu->Console. This assumes the panorama image is the only one loaded in GIMP, otherwise you'll have to inspect gimp.image_list() to see where in the list your image is.

>>> img = gimp.image_list()[0]
>>> for layer in img.layers:
...     print layer.name, layer.offsets

For a test, I downloaded some data that includes the peaks I can see from White Rock in the local Jemez and Sangre de Cristo mountains.

wget -O mountains.tif 'http://opentopo.sdsc.edu/otr/getdem?demtype=SRTMGL3&west=-106.8&south=35.1&east=-105.0&north=36.5&outputFormat=GTiff'

Create a hillshade to make sure it looks like the right region:

gdaldem hillshade mountains.tif hillshade.png
pho hillshade.png

Sanity check: do the lowest and highest elevations look right? Let's look in both meters and feet, using the tricks from Part I.

>>> import gdal
>>> import numpy as np

>>> demdata = gdal.Open('mountains.tif')
>>> demarray = np.array(demdata.GetRasterBand(1).ReadAsArray())
>>> demarray.min(), demarray.max()
(1501, 3974)
>>> print([x * 3.2808399 for x in (demarray.min(), demarray.max())])
[4924.5406899, 13038.057762600001]

While you're here, check the image width and height. You'll need it later.

>>> demarray.shape
(1680, 2160)

Let's pick a viewing spot: Overlook Point in White Rock (marked with the yellow cross on the image above). Its coordinates are -106.1803, 35.827. What are the pixel coordinates? Using the formula from the end of Part I:

>>> import affine
>>> affine_transform = affine.Affine.from_gdal(*demdata.GetGeoTransform())
>>> inverse_transform = ~affine_transform
>>> [round(f) for f in inverse_transform * (-106.1803, 35.827)]
[744, 808]

Before you can do anything, convert the DEM file to a 16-bit greyscale PNG, the only format povray accepts for what it calls height fields:

gdal_translate -ot UInt16 -of PNG demfile.tif demfile.png

Now create a .pov file, which will look something like this:

camera {
   location < .5, .5, 2 >
   look_at < .5, .6, 0 >
}

light_source {
   < 0, 2, 1 > color < 1, 1, 1 >
}

height_field {
   png "YOUR_DEM_FILE.png"

   smooth
   pigment {
      gradient y
      color_map {
         [0 color < .5 .5 .5 > ]
         [1 color < 1 1 1 > ]
      }
   }

   scale < 1, 1, 1 >
}

Imagine your DEM tilting forward to lie flat in front of you: the bottom (southern) edge of your DEM image corresponds to 0 forward, whereas the top (northern) edge is 1 forward. 0 in the first coordinate is the western edge, 1 is the eastern. So, for instance, if you want to put the virtual camera at the middle of the bottom (south) edge of your DEM and look straight north and horizontally, neither up nor down, you'd want:

    location < .5, HEIGHT, 0 >
    look_at < .5, HEIGHT, 1 >

Once you have a .pov file with the right camera and light source, you can run povray like this:

povray +A +W800 +H600 +Idemfile.pov +Orendered.png

For my bowling-pin image, it turned out looking northward (upward) from the south (the bottom of the image) didn't work, because the pillar at the point of the triangle blocked everything else. It turned out to be more useful to put the camera beyond the top (north side) of the image and look southward, back toward the image.

    location < .5, HEIGHT, 2 >
    look_at < .5, HEIGHT, 0 >

A better way to check DEM data files is a beautiful little program called gdaldem. It has several options, like generating a hillshade image:

gdaldem hillshade n35_w107_1arc_v3.tif hillshade.png

gdal has lots more useful stuff beyond gdaldem. For instance, my ultimate goal, ray tracing, will need a PNG:

gdal_translate -ot UInt16 -of PNG srtm_54_07.tif srtm_54_07.png

What's the highest point in your data, and at what coordinates does that peak occur? You can find the highest and lowest points easily with Python's gdal package if you convert the gdal.Dataset into a numpy array:

import gdal
import numpy as np

demdata = gdal.Open(filename)
demarray = np.array(demdata.GetRasterBand(1).ReadAsArray())
print(demarray.min(), demarray.max())
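
To answer the "at what coordinates" half of the question, numpy can give you the pixel position of that maximum, which you can then feed into the affine transform below (a sketch continuing from demarray above):

# Pixel (row, column) of the highest point in the DEM
ymax, xmax = np.unravel_index(np.argmax(demarray), demarray.shape)
print('highest point at column', xmax, 'row', ymax)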

But now that you have the pixel coordinates of the high point, how do you map that back to latitude and longitude? That's trickier, but here's one way, using the affine package:

import affine

affine_transform = affine.Affine.from_gdal(*demdata.GetGeoTransform())
lon, lat = affine_transform * (xmax, ymax)

What about the other way? You have latitude and longitude and you want to know what pixel location that corresponds to? Define an inverse transformation:

inverse_transform = ~affine_transform
px, py = [round(f) for f in inverse_transform * (lon, lat)]

QGIS has a plug-in called QTiles but it didn't work for me: it briefly displayed a progress bar which then disappeared without creating any files. Fortunately, you can do the conversion much more easily with gdal_translate, which at least on Debian is part of the gdal-bin package.

gdal_translate filename.tiff filename.mbtiles

That will create tiles for a limited range of zoom levels (maybe only one zoom level). gdalinfo will tell you the zoom levels in the file. If you want to be able to zoom out and still see your overlay, you might want to add wider zoom levels, which you can do like this:

gdaladdo -r nearest filename.mbtiles 2 4 8 16

Incidentally, gdal can also create a directory of tiles suitable for a web slippy map, though you don't need that for OsmAnd. For that, use gdal2tiles, which on Debian is part of the python-gdal package:

mkdir tiles
gdal2tiles filename.tiff tiles