object has no attribute '__getitem__'

If you're not sure which of the other forums to post your question in, then this forum for general questions is the right place.
Pheno
User
Posts: 1
Joined: Thursday, 11 June 2015, 10:04

Hallo!

My program keeps giving me the same error message and I'm stuck! :/
It would be great if someone could help me.

Error message: File "./main_pheno.py", line 569, in stat_per_phase
valid_stations[pheno_phase] = valid_stations[pheno_phase] + 1

TypeError: 'float' object has no attribute '__getitem__'

Code:

def stat_per_phase(log_file):

   
################################################################
# Data extraction
    
    # Extract observed pheno data at stations
    #indexBegin, indexEnd, obs_pollen_dataStationName, obs_pollen_data = data_extraction_station()
    
    # Select a phenological phase
    pheno_phase = 0
    #pheno_phase = 0.0
    num_of_year_days = 365
    obs_pheno_data = data_extraction_station(pheno_phase)
    valid_station_name = np.empty([obs_pheno_data.dimStationLength], dtype='S8')
    valid_stations = np.empty([1,obs_pheno_data.dimPhenoLength],dtype=np.float32)
    #valid_stations = np.empty([1,obs_pheno_data.dimPhenoLength],dtype='int')#
    
    
     ### instance of the class StationData for pheno data with the name obs_pheno_data ###
    
     #obs_pheno_data = read_data.StationData(fileName="/perm/umwe/umwguest/ingrid_meran/pheno/netcdf_pep725/phaeno_data_raw_5.nc", \
     #   dimStationName="dim_stat", dimPhenoName="dim_pheno", dimTimeName="time", \
     #   longitudeName="longitude", latitudeName="latitude", altitudeName="altitude", timeName="time", \
	   #   varName="pheno_date", stationName="stat_name", cntrcdName="cntrcd", \
	   #   phenoIDName="pheno_id", speciesName="species_name", phenoPhaseName="pheno_phase_name", \
     #   dimStationLength=999, dimPhenoLength=999, dimTimeLength=999, species_name="",pheno_phase="", \
	   #   pheno_id=999, stations="asdf", station_name="asdf", cntrcd="asd",  \
	   #   longitude=999.9, latitude=999.9, altitude=999.9, \
	   #   time="999", variable=999.9)
    
    
    ### obs_pheno_data.dimPhenoLength: the amount of information contained in obs_pheno_data ###
    
    print 'obs_pheno_data',obs_pheno_data.dimPhenoLength
    
    valid_stations = 0.0
    #pheno_phase = 0
    pheno_phase = 0.0
    #while pheno_phase < obs_pheno_data.dimPhenoLength:
    while pheno_phase < 3:
    
        print ' check pheno _phase ', pheno_phase

        # Select a phenological phase and extract data
        obs_pheno_data = data_extraction_station(pheno_phase)


        print ' check dimensions ',obs_pheno_data.dimStationLength, obs_pheno_data.dimPhenoLength, obs_pheno_data.dimTimeLength
        print>>log_file, ' '
        print>>log_file, ' '
        print>>log_file, ' phase ',pheno_phase, obs_pheno_data.phenoPhaseName
        print>>log_file, ' '

        # Mask all invalid/missing pheno observations
        ### If values lie between 999.98 and 1000 they are masked (TRUE/FALSE) ###
        ### .variable: all the information that is contained.
        obs_pheno_data.variable = ma.masked_inside(obs_pheno_data.variable, 999.98, 1000.)
        print 'check mask:', obs_pheno_data.variable.shape
        print 'check mask:', obs_pheno_data.variable.mask
        
        #number_of_valid_values = ma.count(obs_pheno_data.variable[:,0])
        #print>>log_file, ' data count valid     ',station,number_of_valid_values
        #number_of_invalid_values = ma.count_masked(obs_pheno_data.variable[:,0])
        #print>>log_file, ' data count invalid   ',number_of_invalid_values

            
        # Scan phenological data set for valid data
        number_of_valid_values = np.empty([obs_pheno_data.variable.shape[1]], dtype=np.float32)
        mean = np.ma.mean(obs_pheno_data.variable)
        rounded_mean = round(mean,1) * 10
        
        #print 'mean', mean
        print 'rounded_mean', rounded_mean
   
        #if total_valid_data[0,int(rounded_mean)] != 0:
        #    print 'Doppel value !!!'
        #    rounded_mean = rounded_mean+1
       
        
        #print "Number of times rounded_mean appears : ", list.count(rounded_mean);
        
        #valid_stations[0,pheno_phase] = 0
        valid_stations = 0.0
        station = 0
        while station < obs_pheno_data.variable.shape[1]:
        
            # Determine number of valid values at each station over all available years
            mpf = ma.count(obs_pheno_data.variable[:,station])
            # Determine number of valid values at each station over over time period 2000 - 2014
            #mpf = ma.count(obs_pheno_data.variable[50:64,station])

            
            if mpf >= 1:
                number_of_valid_values[valid_stations] = mpf
                #valid_stations[0,int(pheno_phase)] = valid_stations[0,int(pheno_phase)] + 1
                #valid_stations[float(pheno_phase)] = valid_stations[float(pheno_phase)] + 1
                valid_stations[pheno_phase] = valid_stations[pheno_phase] + 1
                                
                #valid_stations = valid_stations + 1 
                 
                     
                #print>>log_file,'{} {} {} {:10.4f} {:10.4f} {} {} {} {}'.format(' check valid obs ', \
                #pheno_phase, station, valid_stations[0,pheno_phase], mpf
                
            station += 1     

        pheno_phase += 1
    print 'total_valid_data: ', total_valid_data [0,:]
        

    curve_plot(log_file=log_file, x_ax_length=0.7, y_ax_length=0.3, x_orig=0.1, y_orig=0.1, \
        x_left=0, x_right=3200, y_bottom=0., y_top=1000000, linlog_x='linear', linlog_y='linear', \
        curve_labels=['Number of observation'], nu_of_curves=1, nu_of_data=3200, plot_y=t_valid_station_name, \
        label_x='Phase', label_y='Total number of Stations', \
        header_text1='Total number of stations per phase 2000 - 2014', \
        output_file_name= HOME+'/ingrid_meran/pollenMON/scripts_python/total_number_stat_per_phase_2000_2014')
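For readers following along: the masking step in the code above uses numpy's `ma.masked_inside`, which can be tried in isolation. The values below are made up for illustration, not data from the script:

```python
import numpy as np
import numpy.ma as ma

# Observations in the closed interval [999.98, 1000.0] count as missing.
raw = np.array([12.5, 999.99, 87.0, 1000.0, 45.3], dtype=np.float32)
masked = ma.masked_inside(raw, 999.98, 1000.0)

print(ma.count(masked))         # 3 -- the two sentinel values are masked
print(ma.count_masked(masked))  # 2
```

`ma.count` then sees only the unmasked entries, which is exactly what the per-station loop later relies on.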

pillmuncher
User
Posts: 1482
Joined: Saturday, 21 March 2009, 22:59
Location: Pfaffenwinkel

@Pheno: Compare lines 16, 37 and 84.
In specifications, Murphy's Law supersedes Ohm's.
Hyperion
Moderator
Posts: 7478
Joined: Friday, 4 August 2006, 14:56
Location: Hamburg
Contact:

Well, the message is clear enough (even though I can't see it in the code as posted, since the line number in the traceback no longer matches the snippet you showed, and you kindly provided *no* offset or matching line number for the post either):

Behind ``valid_stations`` hides a float object. When you apply the ``[]`` syntax to it, Python looks for a ``__getitem__`` method (see the documentation on this).

*Why* that is the case you now have to figure out yourself. Either you are starting from completely wrong assumptions and this can never have worked, or you are unintentionally binding a new object to the name, namely a float object.
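That rebinding can be reproduced in a few lines; the names below are illustrative, not taken from the original script. (Under Python 3 the message reads "'float' object is not subscriptable"; Python 2 gives the ``__getitem__`` wording from the traceback.)

```python
import numpy as np

# An array supports indexing, so this works:
valid_stations = np.zeros((1, 3), dtype=np.float32)
valid_stations[0, 0] = valid_stations[0, 0] + 1

# Rebinding the same name to a plain float discards the array...
valid_stations = 0.0

# ...and any later indexing fails, because float has no __getitem__:
try:
    valid_stations[0] = valid_stations[0] + 1
except TypeError as exc:
    print(exc)
```

Which is exactly the pattern in the posted function: ``valid_stations`` starts as an array but is later reassigned ``valid_stations = 0.0`` before the indexed increment.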
encoding_kapiert = all(verstehen(lesen(info)) for info in (Leonidas slides, blog, slides & text incl. Python3, utf-8 everywhere))
assert encoding_kapiert