Error message

If you are not sure which of the other forums your question belongs in, then this forum for general questions is certainly the right place.
BlackJack
Moderator
Posts: 32994
Joined: Tuesday 25 January 2005, 23:29
Location: Berlin

Re: Error message

Post by BlackJack » Wednesday 15 February 2017, 12:14

@Romaxx: This module is the wrong starting point. For `multiprocessing` you have to make the module that is *executed as the program* importable without side effects. I have already said which lines there need to move from module level into a function that runs only when the program is executed, not on import: essentially everything after the imports. Otherwise every process that `multiprocessing` starts in parallel runs all of it again, because the first thing `multiprocessing` does in a new process is import that module. The whole computation must not start again there as if *that* were the main program.
“Programs must be written for people to read, and only incidentally for machines to execute.” — Abelson & Sussman, SICP (preface to the first edition)
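
A minimal sketch of the structure described above, with placeholder names (`main` and its body are illustrative, not the actual project code):

  1. import multiprocessing
  2.  
  3. def main():
  4.     # Everything that used to sit at module level after the imports.
  5.     pool = multiprocessing.Pool(4)
  6.     print(pool.map(abs, range(-5, 5)))
  7.  
  8. if __name__ == '__main__':
  9.     # Runs only when the module is executed as a program, not when
  10.     # `multiprocessing` re-imports it in a new worker process.
  11.     main()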
Romaxx
User
Posts: 62
Joined: Thursday 26 January 2017, 18:53

Re: Error message

Post by Romaxx » Wednesday 15 February 2017, 12:57

Thank you for your patience.

But I still find it hard to understand the right order or approach.

At least I now think I understand that you actually mean the entire class VSSGP_opt():.

But how do I do that? The following is surely not correct:

  1. def new_function():
  2.     class VSSGP_opt():
  3.             def __init__
  4.             ....


Can you roughly show me what the hierarchy should look like as 'pseudo-code'?

I don't want you to do my work for me; I am merely asking for a minimal example.

In case it helps, I run

  1. res = minimize(vssgp_opt.func, x0, method='L-BFGS-B', jac=vssgp_opt.fprime,
  2.     options={'ftol': 0, 'disp': False, 'maxiter': 500}, tol=0, callback=vssgp_opt.callback)


and vssgp_model is

  1. # To speed Theano up, create ram disk: mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
  2. # Then use flag THEANO_FLAGS='base_compiledir=/mnt/ramdisk' python script.py
  3. import sys; sys.path.insert(0, "../Theano"); sys.path.insert(0, "../../Theano")
  4. import theano; import theano.tensor as T; import theano.sandbox.linalg as sT
  5. import numpy as np
  6. import cPickle
  7.  
  8. print('Theano version: ' + theano.__version__ + ', base compile dir: ' + theano.config.base_compiledir)
  9. theano.config.mode = 'FAST_RUN'
  10. theano.config.optimizer = 'fast_run'
  11. theano.config.reoptimize_unpickled_function = False
  12.  
  13. class VSSGP:
  14.     def __init__(self, use_exact_A = False):
  15.         try:
  16.             print('Trying to load model...')
  17.             with open('model_exact_A.save' if use_exact_A else 'model.save', 'rb') as file_handle:
  18.                 self.f, self.g = cPickle.load(file_handle)
  19.                 print('Loaded!')
  20.             return
  21.         except:
  22.             print('Failed. Creating a new model...')
  23.  
  24.         print('Setting up variables...')
  25.         Z, mu, lSigma = T.dtensor3s('Z', 'mu', 'lSigma')
  26.         X, Y, m, ls, lhyp, lalpha, lalpha_delta, a = T.dmatrices('X', 'Y', 'm', 'ls', 'lhyp', 'lalpha', 'lalpha_delta', 'a')
  27.         b = T.dvector('b')
  28.         ltau = T.dscalar('ltau')
  29.         Sigma, alpha, alpha_delta, tau = T.exp(lSigma), T.exp(lalpha), T.exp(lalpha_delta), T.exp(ltau)
  30.         alpha = alpha % 2*np.pi
  31.         beta = T.minimum(alpha + alpha_delta, 2*np.pi)
  32.         (N, Q), D, K = X.shape, Y.shape[1], mu.shape[1]
  33.         sf2s, lss, ps = T.exp(lhyp[0]), T.exp(lhyp[1:1+Q]), T.exp(lhyp[1+Q:]) # length-scales and periods
  34.  
  35.         print('Setting up model...')
  36.         LL, KL, Y_pred_mean, Y_pred_var, EPhi, EPhiTPhi, opt_A_mean, opt_A_cov = self.get_model_exact_A(Y, X, Z, alpha, beta, mu, Sigma, m, ls, sf2s, lss, ps, tau, a, b, N, Q, D, K)
  37.  
  38.         print('Compiling model...')
  39.         inputs = {'X': X, 'Y': Y, 'Z': Z, 'mu': mu, 'lSigma': lSigma, 'm': m, 'ls': ls, 'lalpha': lalpha,
  40.             'lalpha_delta': lalpha_delta, 'lhyp': lhyp, 'ltau': ltau, 'a': a, 'b': b}
  41.         z = 0.0*sum([T.sum(v) for v in inputs.values()]) # solve a bug with derivative wrt inputs not in the graph
  42.         f = zip(['opt_A_mean', 'opt_A_cov', 'EPhi', 'EPhiTPhi', 'Y_pred_mean', 'Y_pred_var', 'LL', 'KL'],
  43.                 [opt_A_mean, opt_A_cov, EPhi, EPhiTPhi, Y_pred_mean, Y_pred_var, LL, KL])
  44.         self.f = {n: theano.function(inputs.values(), f+z, name=n, on_unused_input='ignore') for n,f in f}
  45.         g = zip(['LL', 'KL'], [LL, KL])
  46.         wrt = {'Z': Z, 'mu': mu, 'lSigma': lSigma, 'm': m, 'ls': ls, 'lalpha': lalpha,
  47.             'lalpha_delta': lalpha_delta, 'lhyp': lhyp, 'ltau': ltau, 'a': a, 'b': b}
  48.         self.g = {vn: {gn: theano.function(inputs.values(), T.grad(gv+z, vv), name='d'+gn+'_d'+vn,
  49.             on_unused_input='ignore') for gn,gv in g} for vn, vv in wrt.iteritems()}
  50.  
  51.         with open('model_exact_A.save' if use_exact_A else 'model.save', 'wb') as file_handle:
  52.             print('Saving model...')
  53.             sys.setrecursionlimit(2000)
  54.             cPickle.dump([self.f, self.g], file_handle, protocol=cPickle.HIGHEST_PROTOCOL)
  55.  
  56.     def get_EPhi(self, X, Z, alpha, beta, mu, Sigma, sf2s, lss, ps, K):
  57.         two_over_K = 2.*sf2s[None, None, :]/K # N x K x comp
  58.         mean_p, std_p = ps**-1, (2*np.pi*lss)**-1 # Q x comp
  59.         Ew = std_p[:, None, :] * mu + mean_p[:, None, :] # Q x K x comp
  60.         XBAR = 2 * np.pi * (X[:, :, None, None] - Z[None, :, :, :]) # N x Q x K x comp
  61.         decay = T.exp(-0.5 * ((std_p[None, :, None, :] * XBAR)**2 * Sigma[None, :, :, :]).sum(1)) # N x K x comp
  62.  
  63.         cos_w = T.cos(alpha + (XBAR * Ew[None, :, :, :]).sum(1)) # N x K x comp
  64.         EPhi = two_over_K**0.5 * decay * cos_w
  65.         EPhi = EPhi.flatten(2) # N x K*comp
  66.  
  67.         cos_2w = T.cos(2 * alpha + 2 * (XBAR * Ew[None, :, :, :]).sum(1)) # N x K x comp
  68.         E_cos_sq = two_over_K * (0.5 + 0.5*decay**4 * cos_2w) # N x K x comp
  69.         EPhiTPhi = (EPhi.T).dot(EPhi)
  70.         EPhiTPhi = EPhiTPhi - T.diag(T.diag(EPhiTPhi)) + T.diag(E_cos_sq.sum(0).flatten(1))
  71.         return EPhi, EPhiTPhi, E_cos_sq
  72.  
  73.     def get_opt_A(self, tau, EPhiTPhi, YT_EPhi):
  74.         SigInv = EPhiTPhi + (tau**-1 + 1e-4) * T.identity_like(EPhiTPhi)
  75.         cholTauSigInv = tau**0.5 * sT.cholesky(SigInv)
  76.         invCholTauSigInv = sT.matrix_inverse(cholTauSigInv)
  77.         tauInvSig = invCholTauSigInv.T.dot(invCholTauSigInv)
  78.         Sig_EPhiT_Y = tau * tauInvSig.dot(YT_EPhi.T)
  79.         return Sig_EPhiT_Y, tauInvSig, cholTauSigInv
  80.  
  81.     def get_model_exact_A(self, Y, X, Z, alpha, beta, mu, Sigma, m, ls, sf2s, lss, ps, tau, a, b, N, Q, D, K):
  82.         Y = Y - (X.dot(a) + b[None,:])
  83.         EPhi, EPhiTPhi, E_cos_sq = self.get_EPhi(X, Z, alpha, beta, mu, Sigma, sf2s, lss, ps, K)
  84.         YT_EPhi = Y.T.dot(EPhi)
  85.  
  86.         opt_A_mean, opt_A_cov, cholSigInv = self.get_opt_A(tau, EPhiTPhi, YT_EPhi)
  87.         LL = (-0.5*N*D * np.log(2 * np.pi) + 0.5*N*D * T.log(tau) - 0.5*tau*T.sum(Y**2)
  88.                - 0.5*D * T.sum(2*T.log(T.diag(cholSigInv)))
  89.                + 0.5*tau * T.sum(opt_A_mean.T * YT_EPhi))
  90.  
  91.         KL_w = 0.5 * (Sigma + mu**2 - T.log(Sigma) - 1).sum()
  92.  
  93.         ''' For prediction, m is assumed to be [m_1, ..., m_d] with m_i = opt_a_i, and ls = opt_A_cov  '''
  94.         Y_pred_mean = EPhi.dot(m) + (X.dot(a) + b[None,:])
  95.         EphiTphi = EPhi[:, :, None] * EPhi[:, None, :] # N x K*comp x K*comp
  96.         comp = sf2s.shape[0]
  97.         EphiTphi = EphiTphi - T.eye(K*comp)[None, :, :] * EphiTphi + T.eye(K*comp)[None, :, :] * E_cos_sq.flatten(2)[:, :, None]
  98.         Psi = T.sum(T.sum(EphiTphi * ls[None, :, :], 2), 1) # N
  99.         flat_diag_n = E_cos_sq.flatten(2) - EPhi**2 # N x K*comp
  100.         Y_pred_var = tau**-1 * T.eye(D) + np.transpose(m.T.dot(flat_diag_n[:, :, None] * m),(1,0,2)) \
  101.                      + T.eye(D)[None, :, :] * Psi[:, None, None]
  102.  
  103.         return LL, KL_w, Y_pred_mean, Y_pred_var, EPhi, EPhiTPhi, opt_A_mean, opt_A_cov
BlackJack
Moderator
Posts: 32994
Joined: Tuesday 25 January 2005, 23:29
Location: Berlin

Re: Error message

Post by BlackJack » Wednesday 15 February 2017, 13:05

@Romaxx: Again: that is the wrong module! The module you *execute as the program* is the one primarily affected. And only *there* does comparing `__name__` with the value '__main__' make any sense at all, because in every other module that condition can *never* be true.
“Programs must be written for people to read, and only incidentally for machines to execute.” — Abelson & Sussman, SICP (preface to the first edition)
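
To illustrate this point (a small sketch with hypothetical module names): the condition can only ever be true in the module that is run as the program, never in a module that is merely imported.

  1. # library.py -- when imported, __name__ is 'library', never '__main__'
  2. def compute():
  3.     return 42
  4.  
  5. # program.py -- run as ``python program.py``, so __name__ is '__main__'
  6. from library import compute
  7.  
  8. if __name__ == '__main__':
  9.     print(compute())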
Romaxx
User
Posts: 62
Joined: Thursday 26 January 2017, 18:53

Re: Error message

Post by Romaxx » Wednesday 15 February 2017, 13:16

Hm, at this point I have essentially posted all the modules. One of them is VSSGP_opt, i.e.

  1.         import numpy as np
  2.         from vssgp_model import VSSGP
  3.         import pylab
  4.         import multiprocessing
  5.         def extend(x, y, z = {}):
  6.             return dict(x.items() + y.items() + z.items())
  7.         def eval_f_LL(X, Y, params):
  8.             out_f = VSSGP.f['LL'](**extend({'X': X, 'Y': Y}, params))
  9.             return out_f
  10.         def eval_g_LL(name, X, Y, params):
  11.             out_g = VSSGP.f['LL'](**extend({'X': X, 'Y': Y}, params))
  12.             return out_g
  13.          
  14.         class VSSGP_opt():
  15.             def __init__(self, N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A = False, test_set = {},
  16.                          parallel = False, batch_size = None, components = None, print_interval = None):
  17.                 self.vssgp, self.N, self.Q, self.K, self.fixed_params = VSSGP(use_exact_A), N, Q, K, fixed_params
  18.                 self.use_exact_A, self.parallel, self.batch_size = use_exact_A, parallel, batch_size
  19.                 self.inputs, self.test_set = inputs, test_set
  20.                 self.print_interval = 10 if print_interval is None else print_interval
  21.                 self.opt_param_names = [n for n,_ in opt_params.iteritems()]
  22.                 opt_param_values = [np.atleast_2d(opt_params[n]) for n in self.opt_param_names]
  23.                 self.shapes = [v.shape for v in opt_param_values]
  24.                 self.sizes = [sum([np.prod(x) for x in self.shapes[:i]]) for i in xrange(len(self.shapes)+1)]
  25.                 self.components = opt_params['lSigma'].shape[2] if components is None else components
  26.                 self.colours = [np.random.rand(3,1) for c in xrange(self.components)]
  27.                 self.callback_counter = [0]
  28.                 if batch_size is not None:
  29.                     if parallel:
  30.                         self.pool = multiprocessing.Pool(int(self.N / self.batch_size))
  31.                     else:
  32.                         self.params = np.concatenate([v.flatten() for v in opt_param_values])
  33.                         self.param_updates = np.zeros_like(self.params)
  34.                         self.moving_mean_squared = np.zeros_like(self.params)
  35.                         self.learning_rates = 1e-2*np.ones_like(self.params)
  36.          
  37.          
  38.             def unpack(self, x):
  39.                 x_param_values = [x[self.sizes[i-1]:self.sizes[i]].reshape(self.shapes[i-1]) for i in xrange(1,len(self.shapes)+1)]
  40.                 params = {n:v for (n,v) in zip(self.opt_param_names, x_param_values)}
  41.                 if 'ltau' in params:
  42.                     params['ltau'] = params['ltau'].squeeze()
  43.                 return params
  44.          
  45.             def func(self, x):
  46.                 params = extend(self.fixed_params, self.unpack(x))
  47.                 if self.batch_size is not None:
  48.                     X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
  49.                     if self.parallel:
  50.                         arguments = [(X[i::splits], Y[i::splits], params) for i in xrange(splits)]
  51.                         LL = sum(self.pool.map_async(eval_f_LL, arguments).get(9999999))
  52.                         KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
  53.                     else:
  54.                         split = np.random.randint(splits)
  55.                         LL = self.N / self.batch_size * self.vssgp.f['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
  56.                         print(LL)
  57.                         KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
  58.                 else:
  59.                     params = extend(self.inputs, params)
  60.                     LL, KL = self.vssgp.f['LL'](**params), self.vssgp.f['KL'](**params)
  61.                 return -(LL - KL)
  62.      
  63.             def fprime(self, x):
  64.                 grads, params = [], extend(self.fixed_params, self.unpack(x))
  65.                 for n in self.opt_param_names:
  66.                     if self.batch_size is not None:
  67.                         X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
  68.                         if self.parallel:
  69.                             arguments = [(n, X[i::splits], Y[i::splits], params) for i in xrange(splits)]
  70.                             dLL = sum(self.pool.map_async(eval_g_LL, arguments).get(9999999))
  71.                             dKL = self.vssgp.g[n]['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
  72.                         else:
  73.                             split = np.random.randint(splits)
  74.                             dLL = self.N / self.batch_size * self.vssgp.g[n]['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
  75.                             dKL = self.vssgp.g[n]['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
  76.                     else:
  77.                         params = extend(self.inputs, params)
  78.                         dLL, dKL = self.vssgp.g[n]['LL'](**params), self.vssgp.g[n]['KL'](**params)
  79.                     grads += [-(dLL - dKL)]
  80.                 return np.concatenate([grad.flatten() for grad in grads])
  81.          
  82.             def callback(self, x):
  83.                 if self.callback_counter[0]%self.print_interval == 0:
  84.                     opt_params = self.unpack(x)
  85.                     params = extend(self.inputs, self.fixed_params, opt_params)
  86.                     LL = self.vssgp.f['LL'](**params)
  87.                     KL = self.vssgp.f['KL'](**params)
  88.                     print(LL - KL)
  89.                 self.callback_counter[0] += 1


perhaps a bit confusing, because the class there has the same name. And then there is vssgp_model:


  1.     # To speed Theano up, create ram disk: mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
  2.     # Then use flag THEANO_FLAGS='base_compiledir=/mnt/ramdisk' python script.py
  3.     import sys; sys.path.insert(0, "../Theano"); sys.path.insert(0, "../../Theano")
  4.     import theano; import theano.tensor as T; import theano.sandbox.linalg as sT
  5.     import numpy as np
  6.     import cPickle
  7.      
  8.     print('Theano version: ' + theano.__version__ + ', base compile dir: ' + theano.config.base_compiledir)
  9.     theano.config.mode = 'FAST_RUN'
  10.     theano.config.optimizer = 'fast_run'
  11.     theano.config.reoptimize_unpickled_function = False
  12.      
  13.     class VSSGP:
  14.         def __init__(self, use_exact_A = False):
  15.             try:
  16.                 print('Trying to load model...')
  17.                 with open('model_exact_A.save' if use_exact_A else 'model.save', 'rb') as file_handle:
  18.                     self.f, self.g = cPickle.load(file_handle)
  19.                     print('Loaded!')
  20.                 return
  21.             except:
  22.                 print('Failed. Creating a new model...')
  23.      
  24.             print('Setting up variables...')
  25.             Z, mu, lSigma = T.dtensor3s('Z', 'mu', 'lSigma')
  26.             X, Y, m, ls, lhyp, lalpha, lalpha_delta, a = T.dmatrices('X', 'Y', 'm', 'ls', 'lhyp', 'lalpha', 'lalpha_delta', 'a')
  27.             b = T.dvector('b')
  28.             ltau = T.dscalar('ltau')
  29.             Sigma, alpha, alpha_delta, tau = T.exp(lSigma), T.exp(lalpha), T.exp(lalpha_delta), T.exp(ltau)
  30.             alpha = alpha % 2*np.pi
  31.             beta = T.minimum(alpha + alpha_delta, 2*np.pi)
  32.             (N, Q), D, K = X.shape, Y.shape[1], mu.shape[1]
  33.             sf2s, lss, ps = T.exp(lhyp[0]), T.exp(lhyp[1:1+Q]), T.exp(lhyp[1+Q:]) # length-scales and periods
  34.      
  35.             print('Setting up model...')
  36.             LL, KL, Y_pred_mean, Y_pred_var, EPhi, EPhiTPhi, opt_A_mean, opt_A_cov = self.get_model_exact_A(Y, X, Z, alpha, beta, mu, Sigma, m, ls, sf2s, lss, ps, tau, a, b, N, Q, D, K)
  37.      
  38.             print('Compiling model...')
  39.             inputs = {'X': X, 'Y': Y, 'Z': Z, 'mu': mu, 'lSigma': lSigma, 'm': m, 'ls': ls, 'lalpha': lalpha,
  40.                 'lalpha_delta': lalpha_delta, 'lhyp': lhyp, 'ltau': ltau, 'a': a, 'b': b}
  41.             z = 0.0*sum([T.sum(v) for v in inputs.values()]) # solve a bug with derivative wrt inputs not in the graph
  42.             f = zip(['opt_A_mean', 'opt_A_cov', 'EPhi', 'EPhiTPhi', 'Y_pred_mean', 'Y_pred_var', 'LL', 'KL'],
  43.                     [opt_A_mean, opt_A_cov, EPhi, EPhiTPhi, Y_pred_mean, Y_pred_var, LL, KL])
  44.             self.f = {n: theano.function(inputs.values(), f+z, name=n, on_unused_input='ignore') for n,f in f}
  45.             g = zip(['LL', 'KL'], [LL, KL])
  46.             wrt = {'Z': Z, 'mu': mu, 'lSigma': lSigma, 'm': m, 'ls': ls, 'lalpha': lalpha,
  47.                 'lalpha_delta': lalpha_delta, 'lhyp': lhyp, 'ltau': ltau, 'a': a, 'b': b}
  48.             self.g = {vn: {gn: theano.function(inputs.values(), T.grad(gv+z, vv), name='d'+gn+'_d'+vn,
  49.                 on_unused_input='ignore') for gn,gv in g} for vn, vv in wrt.iteritems()}
  50.      
  51.             with open('model_exact_A.save' if use_exact_A else 'model.save', 'wb') as file_handle:
  52.                 print('Saving model...')
  53.                 sys.setrecursionlimit(2000)
  54.                 cPickle.dump([self.f, self.g], file_handle, protocol=cPickle.HIGHEST_PROTOCOL)
  55.      
  56.         def get_EPhi(self, X, Z, alpha, beta, mu, Sigma, sf2s, lss, ps, K):
  57.             two_over_K = 2.*sf2s[None, None, :]/K # N x K x comp
  58.             mean_p, std_p = ps**-1, (2*np.pi*lss)**-1 # Q x comp
  59.             Ew = std_p[:, None, :] * mu + mean_p[:, None, :] # Q x K x comp
  60.             XBAR = 2 * np.pi * (X[:, :, None, None] - Z[None, :, :, :]) # N x Q x K x comp
  61.             decay = T.exp(-0.5 * ((std_p[None, :, None, :] * XBAR)**2 * Sigma[None, :, :, :]).sum(1)) # N x K x comp
  62.      
  63.             cos_w = T.cos(alpha + (XBAR * Ew[None, :, :, :]).sum(1)) # N x K x comp
  64.             EPhi = two_over_K**0.5 * decay * cos_w
  65.             EPhi = EPhi.flatten(2) # N x K*comp
  66.      
  67.             cos_2w = T.cos(2 * alpha + 2 * (XBAR * Ew[None, :, :, :]).sum(1)) # N x K x comp
  68.             E_cos_sq = two_over_K * (0.5 + 0.5*decay**4 * cos_2w) # N x K x comp
  69.             EPhiTPhi = (EPhi.T).dot(EPhi)
  70.             EPhiTPhi = EPhiTPhi - T.diag(T.diag(EPhiTPhi)) + T.diag(E_cos_sq.sum(0).flatten(1))
  71.             return EPhi, EPhiTPhi, E_cos_sq
  72.      
  73.         def get_opt_A(self, tau, EPhiTPhi, YT_EPhi):
  74.             SigInv = EPhiTPhi + (tau**-1 + 1e-4) * T.identity_like(EPhiTPhi)
  75.             cholTauSigInv = tau**0.5 * sT.cholesky(SigInv)
  76.             invCholTauSigInv = sT.matrix_inverse(cholTauSigInv)
  77.             tauInvSig = invCholTauSigInv.T.dot(invCholTauSigInv)
  78.             Sig_EPhiT_Y = tau * tauInvSig.dot(YT_EPhi.T)
  79.             return Sig_EPhiT_Y, tauInvSig, cholTauSigInv
  80.      
  81.         def get_model_exact_A(self, Y, X, Z, alpha, beta, mu, Sigma, m, ls, sf2s, lss, ps, tau, a, b, N, Q, D, K):
  82.             Y = Y - (X.dot(a) + b[None,:])
  83.             EPhi, EPhiTPhi, E_cos_sq = self.get_EPhi(X, Z, alpha, beta, mu, Sigma, sf2s, lss, ps, K)
  84.             YT_EPhi = Y.T.dot(EPhi)
  85.      
  86.             opt_A_mean, opt_A_cov, cholSigInv = self.get_opt_A(tau, EPhiTPhi, YT_EPhi)
  87.             LL = (-0.5*N*D * np.log(2 * np.pi) + 0.5*N*D * T.log(tau) - 0.5*tau*T.sum(Y**2)
  88.                    - 0.5*D * T.sum(2*T.log(T.diag(cholSigInv)))
  89.                    + 0.5*tau * T.sum(opt_A_mean.T * YT_EPhi))
  90.      
  91.             KL_w = 0.5 * (Sigma + mu**2 - T.log(Sigma) - 1).sum()
  92.      
  93.             ''' For prediction, m is assumed to be [m_1, ..., m_d] with m_i = opt_a_i, and ls = opt_A_cov  '''
  94.             Y_pred_mean = EPhi.dot(m) + (X.dot(a) + b[None,:])
  95.             EphiTphi = EPhi[:, :, None] * EPhi[:, None, :] # N x K*comp x K*comp
  96.             comp = sf2s.shape[0]
  97.             EphiTphi = EphiTphi - T.eye(K*comp)[None, :, :] * EphiTphi + T.eye(K*comp)[None, :, :] * E_cos_sq.flatten(2)[:, :, None]
  98.             Psi = T.sum(T.sum(EphiTphi * ls[None, :, :], 2), 1) # N
  99.             flat_diag_n = E_cos_sq.flatten(2) - EPhi**2 # N x K*comp
  100.             Y_pred_var = tau**-1 * T.eye(D) + np.transpose(m.T.dot(flat_diag_n[:, :, None] * m),(1,0,2)) \
  101.                          + T.eye(D)[None, :, :] * Psi[:, None, None]
  102.      
  103.             return LL, KL_w, Y_pred_mean, Y_pred_var, EPhi, EPhiTPhi, opt_A_mean, opt_A_cov


Everything is initialized by the following:

  1. from vssgp_opt import VSSGP_opt
  2. from scipy.optimize import minimize
  3. import numpy as np
  4. from numpy.random import randn, rand
  5. np.set_printoptions(precision=2, suppress=True)
  6. import pylab; pylab.ion() # turn interactive mode on
  7.  
  8. N, Q, D, K = 1000, 1, 1, 50
  9. components, init_period, init_lengthscales, sf2s, tau = 2, 1e32, 1, np.array([1, 5]), 1
  10.  
  11. # Some synthetic data to play with
  12. X = rand(N,Q) * 5*np.pi
  13. X = np.sort(X, axis=0)
  14. Z = rand(Q,K,components) * 5*np.pi
  15. #a, b, c, d, e, f = randn(), randn(), randn(), randn(), randn(), randn()
  16. #a, b, c, d, e, f = 0.6, 0.7, -0.6, 0.5, -0.1, -0.8
  17. #a, b, c, d, e, f = -0.6, -0.3, -0.6, 0.6, 0.7, 0.6
  18. #a, b, c, d, e, f = -0.5, -0.3, -0.6, 0.1, 1.1, 0.1
  19. a, b, c, d, e, f = 0.6, -1.8, -0.5, -0.5, 1.7, 0
  20. Y = a*np.sin(b*X+c) + d*np.sin(e*X+f)
  21.  
  22. # Initialise near the posterior:
  23. mu = randn(Q,K,components)
  24. # TODO: Currently tuned by hand to smallest value that doesn't diverge; we break symmetry to allow for some to get very small while others very large
  25. feature_lengthscale = 5 # features are non-diminishing up to feature_lengthscale / lengthscale from z / lengthscale
  26. lSigma = np.log(randn(Q,K,components)**2 / feature_lengthscale**2) # feature weights are np.exp(-0.5 * (x-z)**2 * Sigma / lengthscale**2)
  27. lalpha = np.log(rand(K,components)*2*np.pi)
  28. lalpha_delta = np.log(rand(K,components) * (2*np.pi - lalpha))
  29. m = randn(components*K,D)
  30. ls = np.zeros((components*K,D)) - 4
  31. lhyp = np.log(1 + 1e-2*randn(2*Q+1, components)) # break symmetry
  32. lhyp[0,:] += np.log(sf2s) # sf2
  33. lhyp[1:Q+1,:] += np.log(init_lengthscales) # length-scales
  34. lhyp[Q+1:,:] += np.log(init_period) # period
  35. ltau = np.log(tau) # precision
  36. lstsq = np.linalg.lstsq(np.hstack([X, np.ones((N,1))]), Y)[0]
  37. a = 0*np.atleast_2d(lstsq[0]) # mean function slope
  38. b = 0*lstsq[1] # mean function intercept
  39.  
  40. opt_params = {'Z': Z, 'm': m, 'ls': ls, 'mu': mu, 'lSigma': lSigma, 'lhyp': lhyp, 'ltau': ltau}
  41. fixed_params = {'lalpha': lalpha, 'lalpha_delta': lalpha_delta, 'a': a, 'b': b}
  42. inputs = {'X': X, 'Y': Y}
  43. vssgp_opt = VSSGP_opt(N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A=True, parallel = True, batch_size = 25, print_interval=1)
  44.  
  45. # LBFGS
  46. x0 = np.concatenate([np.atleast_2d(opt_params[n]).flatten() for n in vssgp_opt.opt_param_names])
  47. vssgp_opt.callback(x0)
  48.  
  49.  
  50. res = minimize(vssgp_opt.func, x0, method='L-BFGS-B', jac=vssgp_opt.fprime,
  51.     options={'ftol': 0, 'disp': False, 'maxiter': 500}, tol=0, callback=vssgp_opt.callback)
  52.  


The only thing I have not included here are the Theano-compiled functions that get compiled/loaded in vssgp_model.

Which module is it, concretely? Unfortunately I just don't understand it.
BlackJack
Moderator
Posts: 32994
Joined: Tuesday 25 January 2005, 23:29
Location: Berlin

Re: Error message

Post by BlackJack » Wednesday 15 February 2017, 14:38

The module that is executed as the program. For the umpteenth time. You only run one of them as the program. That is the module that is executed as the program. And it has to be changed so that it can be *imported* *without* the whole computation being run *in the process*. That may only happen when it is executed as a program. So: ``python modulname.py`` → great computation runs, but in Python ``import modulname`` → great computation does *not* run. And inside the module you can tell these two scenarios apart by `__name__`, and act accordingly.
  1. $ cat modul.py
  2. print __name__
  3.  
  4. if __name__ == '__main__':
  5.     print 'Hallo'
  6. $ python modul.py
  7. __main__
  8. Hallo
  9. $ python
  10. Python 2.7.12 (default, Nov 19 2016, 06:48:10)
  11. [GCC 5.4.0 20160609] on linux2
  12. Type "help", "copyright", "credits" or "license" for more information.
  13. >>> import modul
  14. modul
  15. >>>
“Programs must be written for people to read, and only incidentally for machines to execute.” — Abelson & Sussman, SICP (preface to the first edition)
Romaxx
User
Posts: 62
Joined: Thursday 26 January 2017, 18:53

Re: Error message

Post by Romaxx » Wednesday 15 February 2017, 15:14

But a few posts ago you wrote 'essentially everything after the imports'.
In the post before that I quoted VSSGP_opt to you, so I assume that you mean this module, namely everything from lines 11/12 onward (inclusive), if I post it here once more:

  1.             import numpy as np
  2.             from vssgp_model import VSSGP
  3.             import pylab
  4.             import multiprocessing
  5.             def extend(x, y, z = {}):
  6.                 return dict(x.items() + y.items() + z.items())
  7.             def eval_f_LL(X, Y, params):
  8.                 out_f = VSSGP.f['LL'](**extend({'X': X, 'Y': Y}, params))
  9.                 return out_f
  10.             def eval_g_LL(name, X, Y, params):
  11.                 out_g = VSSGP.f['LL'](**extend({'X': X, 'Y': Y}, params))
  12.                 return out_g
  13.              
  14.             class VSSGP_opt():
  15.                 def __init__(self, N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A = False, test_set = {},
  16.                              parallel = False, batch_size = None, components = None, print_interval = None):
  17.                     self.vssgp, self.N, self.Q, self.K, self.fixed_params = VSSGP(use_exact_A), N, Q, K, fixed_params
  18.                     self.use_exact_A, self.parallel, self.batch_size = use_exact_A, parallel, batch_size
  19.                     self.inputs, self.test_set = inputs, test_set
  20.                     self.print_interval = 10 if print_interval is None else print_interval
  21.                     self.opt_param_names = [n for n,_ in opt_params.iteritems()]
  22.                     opt_param_values = [np.atleast_2d(opt_params[n]) for n in self.opt_param_names]
  23.                     self.shapes = [v.shape for v in opt_param_values]
  24.                     self.sizes = [sum([np.prod(x) for x in self.shapes[:i]]) for i in xrange(len(self.shapes)+1)]
  25.                     self.components = opt_params['lSigma'].shape[2] if components is None else components
  26.                     self.colours = [np.random.rand(3,1) for c in xrange(self.components)]
  27.                     self.callback_counter = [0]
  28.                     if batch_size is not None:
  29.                         if parallel:
  30.                             self.pool = multiprocessing.Pool(int(self.N / self.batch_size))
  31.                         else:
  32.                             self.params = np.concatenate([v.flatten() for v in opt_param_values])
  33.                             self.param_updates = np.zeros_like(self.params)
  34.                             self.moving_mean_squared = np.zeros_like(self.params)
  35.                             self.learning_rates = 1e-2*np.ones_like(self.params)
  36.              
  37.              
  38.                 def unpack(self, x):
  39.                     x_param_values = [x[self.sizes[i-1]:self.sizes[i]].reshape(self.shapes[i-1]) for i in xrange(1,len(self.shapes)+1)]
  40.                     params = {n:v for (n,v) in zip(self.opt_param_names, x_param_values)}
  41.                     if 'ltau' in params:
  42.                         params['ltau'] = params['ltau'].squeeze()
  43.                     return params
  44.              
  45.                 def func(self, x):
  46.                     params = extend(self.fixed_params, self.unpack(x))
  47.                     if self.batch_size is not None:
  48.                         X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
  49.                         if self.parallel:
  50.                             arguments = [(X[i::splits], Y[i::splits], params) for i in xrange(splits)]
  51.                             LL = sum(self.pool.map_async(eval_f_LL, arguments).get(9999999))
  52.                             KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
  53.                         else:
  54.                             split = np.random.randint(splits)
  55.                             LL = self.N / self.batch_size * self.vssgp.f['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
  56.                             print(LL)
  57.                             KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
  58.                     else:
  59.                         params = extend(self.inputs, params)
  60.                         LL, KL = self.vssgp.f['LL'](**params), self.vssgp.f['KL'](**params)
  61.                     return -(LL - KL)
  62.          
  63.                 def fprime(self, x):
  64.                     grads, params = [], extend(self.fixed_params, self.unpack(x))
  65.                     for n in self.opt_param_names:
  66.                         if self.batch_size is not None:
  67.                             X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
  68.                             if self.parallel:
  69.                                 arguments = [(n, X[i::splits], Y[i::splits], params) for i in xrange(splits)]
  70.                                 dLL = sum(self.pool.map_async(eval_g_LL, arguments).get(9999999))
  71.                                 dKL = self.vssgp.g[n]['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
  72.                             else:
  73.                                 split = np.random.randint(splits)
  74.                                 dLL = self.N / self.batch_size * self.vssgp.g[n]['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
  75.                                 dKL = self.vssgp.g[n]['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
  76.                         else:
  77.                             params = extend(self.inputs, params)
  78.                             dLL, dKL = self.vssgp.g[n]['LL'](**params), self.vssgp.g[n]['KL'](**params)
  79.                         grads += [-(dLL - dKL)]
  80.                     return np.concatenate([grad.flatten() for grad in grads])
  81.              
  82.                 def callback(self, x):
  83.                     if self.callback_counter[0]%self.print_interval == 0:
  84.                         opt_params = self.unpack(x)
  85.                         params = extend(self.inputs, self.fixed_params, opt_params)
  86.                         LL = self.vssgp.f['LL'](**params)
  87.                         KL = self.vssgp.f['KL'](**params)
  88.                         print(LL - KL)
  89.                     self.callback_counter[0] += 1


And in the post after that you write 'Again: that is the wrong module!', after I deliver a supposedly trivial attempt at a solution. Sorry, but I simply don't understand it. YOU could help me a great deal by getting concrete and showing me in the code which module you mean, because apparently it is already there, since you write 'You only run one of them as the program'.
Don't get me wrong, I really do want to learn something...
Kebap
User
Posts: 340
Joined: Tuesday 15 November 2011, 14:20
Location: Dortmund

Re: Error message

Post by Kebap » Wednesday 15 February 2017, 15:17

Usually you would create a second program, in which you then import this one. Don't you have something like that?
MorgenGrauen: 1 world, >12 guilds, >85 adventures, >1000 weapons and armours,
>2500 NPCs, >16000 rooms, >170 volunteer programmers, plain text, since 1992.
Romaxx
User
Posts: 62
Joined: Thursday 26 January 2017, 18:53

Re: Error message

Post by Romaxx » Wednesday 15 February 2017, 15:29

In the post of 'Wednesday 15 February 2017, 13:16' I quoted everything I have, except for the Theano functions that get compiled or loaded.
And in the post of 'Sunday 12 February 2017, 17:43' I linked to the demo.
BlackJack
Moderator
Posts: 32994
Joined: Tuesday 25 January 2005, 23:29
Location: Berlin

Re: Error message

Post by BlackJack » Wednesday 15 February 2017, 16:40

@Romaxx: Yes, the module you execute as the program is already there. Otherwise you could not have shown the error messages that occurred while running it. And if you are now asking which module it is that you executed… um, that question cannot seriously be asked.
“Programs must be written for people to read, and only incidentally for machines to execute.” — Abelson & Sussman, SICP (preface to the first edition)
Romaxx
User
Posts: 62
Joined: Thursday 26 January 2017, 18:53

Re: Error message

Post by Romaxx » Wednesday 15 February 2017, 16:53

I don't know how someone offering help can consistently refuse to take almost all of my postings seriously...
That is my answer to that.

Thank you for your help up to this point.

Perhaps there is someone who can simply tell me concretely which module it is. Thank you.
BlackJack
Moderator
Posts: 32994
Joined: Tuesday 25 January 2005, 23:29
Location: Berlin

Re: Error message

Post by BlackJack » Wednesday 15 February 2017, 17:16

@Romaxx: I don't understand your question about the module after I have written answers to it several times. And if you execute a module and then ask which module I mean when I say the module you run as the program, then in my opinion it cannot be that someone asks that question. That simply doesn't work. You have run the module, several times; the README describes which module to run, you did that, and you seriously ask which module I mean? And you keep trotting out the same wrong module even though I have said repeatedly that it is not that one, but the one that was executed as the program. Even if you didn't know which one that was, which cannot be the case, you cannot keep coming back with the wrong module. You are pulling my leg here; how am I supposed to take this seriously?
“Programs must be written for people to read, and only incidentally for machines to execute.” — Abelson & Sussman, SICP (preface to the first edition)
Romaxx
User
Posts: 62
Joined: Thursday 26 January 2017, 18:53

Re: Error message

Post by Romaxx » Wednesday 15 February 2017, 21:24

Sure, and I have nothing better to do all day than mess with helpful people in forums.
So far you have not given me any answer other than 'the module that is executed as the program'. Sorry, but how can you then expect me to get anywhere with this? Besides, I have also suggested other possible ideas to you several times, but unfortunately guessing and hitting the bullseye is not always crowned with success.
With all due respect to your worldview...
I will leave it at that.
BlackJack
Moderator
Posts: 32994
Joined: Tuesday 25 January 2005, 23:29
Location: Berlin

Re: Error message

Post by BlackJack » Wednesday 15 February 2017, 21:32

@Romaxx: Where did you guess anything else? And if you had: with three modules you can't find the right one by guessing repeatedly? And I did name the module to you. And which lines are affected. If reading comprehension is not your thing, then this is the wrong forum. Then again, you can really only be a troll. You started the program and cannot say which module you started as the program; that statement is simply not realistic.
“Programs must be written for people to read, and only incidentally for machines to execute.” — Abelson & Sussman, SICP (preface to the first edition)
Romaxx
User
Posts: 62
Joined: Thursday 26 January 2017, 18:53

Re: Error message

Post by Romaxx » Friday 17 February 2017, 12:03

Here I am again.
I have now packed part of vssgp_example into a function:

  1. from vssgp_opt import VSSGP_opt
  2. from scipy.optimize import minimize
  3. import numpy as np
  4. from numpy.random import randn, rand
  5. np.set_printoptions(precision=2, suppress=True)
  6. import pylab; pylab.ion() # turn interactive mode on
  7.  
  8. def STARTME(N = 1000, Q=1, D=1, K=50, components=2, init_period=1e32, init_lengthscales=1, sf2s=np.array([1, 5]), tau=1):
  9.     # Some synthetic data to play with
  10.     X = rand(N,Q) * 5*np.pi
  11.     X = np.sort(X, axis=0)
  12.     Z = rand(Q,K,components) * 5*np.pi
  13.     #a, b, c, d, e, f = randn(), randn(), randn(), randn(), randn(), randn()
  14.     #a, b, c, d, e, f = 0.6, 0.7, -0.6, 0.5, -0.1, -0.8
  15.     #a, b, c, d, e, f = -0.6, -0.3, -0.6, 0.6, 0.7, 0.6
  16.     #a, b, c, d, e, f = -0.5, -0.3, -0.6, 0.1, 1.1, 0.1
  17.     a, b, c, d, e, f = 0.6, -1.8, -0.5, -0.5, 1.7, 0
  18.     Y = a*np.sin(b*X+c) + d*np.sin(e*X+f)
  19.    
  20.     # Initialise near the posterior:
  21.     mu = randn(Q,K,components)
  22.     # TODO: Currently tuned by hand to smallest value that doesn't diverge; we break symmetry to allow for some to get very small while others very large
  23.     feature_lengthscale = 5 # features are non-diminishing up to feature_lengthscale / lengthscale from z / lengthscale
  24.     lSigma = np.log(randn(Q,K,components)**2 / feature_lengthscale**2) # feature weights are np.exp(-0.5 * (x-z)**2 * Sigma / lengthscale**2)
  25.     lalpha = np.log(rand(K,components)*2*np.pi)
  26.     lalpha_delta = np.log(rand(K,components) * (2*np.pi - lalpha))
  27.     m = randn(components*K,D)
  28.     ls = np.zeros((components*K,D)) - 4
  29.     lhyp = np.log(1 + 1e-2*randn(2*Q+1, components)) # break symmetry
  30.     lhyp[0,:] += np.log(sf2s) # sf2
  31.     lhyp[1:Q+1,:] += np.log(init_lengthscales) # length-scales
  32.     lhyp[Q+1:,:] += np.log(init_period) # period
  33.     ltau = np.log(tau) # precision
  34.     lstsq = np.linalg.lstsq(np.hstack([X, np.ones((N,1))]), Y)[0]
  35.     a = 0*np.atleast_2d(lstsq[0]) # mean function slope
  36.     b = 0*lstsq[1] # mean function intercept
  37.    
  38.     opt_params = {'Z': Z, 'm': m, 'ls': ls, 'mu': mu, 'lSigma': lSigma, 'lhyp': lhyp, 'ltau': ltau}
  39.     fixed_params = {'lalpha': lalpha, 'lalpha_delta': lalpha_delta, 'a': a, 'b': b}
  40.     inputs = {'X': X, 'Y': Y}
  41.     vssgp_opt = VSSGP_opt(N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A=True, parallel = True, batch_size = 25, print_interval=1)
  42.    
  43.     # LBFGS
  44.     x0 = np.concatenate([np.atleast_2d(opt_params[n]).flatten() for n in vssgp_opt.opt_param_names])
  45.     pylab.figure(num=None, figsize=(12, 9), dpi=80, facecolor='w', edgecolor='w')
  46.     vssgp_opt.callback(x0)
  47.     res = minimize(vssgp_opt.func, x0, method='L-BFGS-B', jac=vssgp_opt.fprime,
  48.         options={'ftol': 0, 'disp': False, 'maxiter': 500}, tol=0, callback=vssgp_opt.callback)
  49.    
  50.     raw_input("PRESS ENTER TO CONTINUE.")
  51.    
  52.     return (res)


Afterwards I call the following:

  1. if __name__ == '__main__': STARTME()


I think it should be fine like this now.

I have now changed the vssgp_opt module to:

  1. import numpy as np
  2. from vssgp_model import VSSGP
  3. import multiprocessing
  4.  
  5. class VSSGP_opt():
  6.     def __init__(self, N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A = False, test_set = {},
  7.                  parallel = False, batch_size = None, components = None, print_interval = None):
  8.         self.vssgp, self.N, self.Q, self.K, self.fixed_params = VSSGP(use_exact_A), N, Q, K, fixed_params
  9.         self.use_exact_A, self.parallel, self.batch_size = use_exact_A, parallel, batch_size
  10.         self.inputs, self.test_set = inputs, test_set
  11.         self.print_interval = 10 if print_interval is None else print_interval
  12.         self.opt_param_names = [n for n,_ in opt_params.iteritems()]
  13.         opt_param_values = [np.atleast_2d(opt_params[n]) for n in self.opt_param_names]
  14.         self.shapes = [v.shape for v in opt_param_values]
  15.         self.sizes = [sum([np.prod(x) for x in self.shapes[:i]]) for i in xrange(len(self.shapes)+1)]
  16.         self.components = opt_params['lSigma'].shape[2] if components is None else components
  17.         self.colours = [np.random.rand(3,1) for c in xrange(self.components)]
  18.         self.callback_counter = [0]
  19.         if batch_size is not None:
  20.             if parallel:
  21.                 self.pool = multiprocessing.Pool(int(self.N / self.batch_size))
  22.             else:
  23.                 self.params = np.concatenate([v.flatten() for v in opt_param_values])
  24.                 self.param_updates = np.zeros_like(self.params)
  25.                 self.moving_mean_squared = np.zeros_like(self.params)
  26.                 self.learning_rates = 1e-2*np.ones_like(self.params)
  27.                
  28.     def extend(self, x, y, z = {}):
  29.        
  30.         return dict(x.items() + y.items() + z.items())
  31.    
  32.     def eval_f_LL(self, arguments):
  33.         out_f = self.vssgp.f['LL'](**arguments)
  34.         return (out_f)
  35.  
  36.     def eval_g_LL(self, arguments):
  37.         out_g = self.vssgp.g['LL'](**arguments)
  38.         return (out_g)
  39.  
  40.     def unpack(self, x):
  41.         x_param_values = [x[self.sizes[i-1]:self.sizes[i]].reshape(self.shapes[i-1]) for i in xrange(1,len(self.shapes)+1)]
  42.         params = {n:v for (n,v) in zip(self.opt_param_names, x_param_values)}
  43.         if 'ltau' in params:
  44.             params['ltau'] = params['ltau'].squeeze()
  45.         return params
  46.  
  47.     def func(self, x):
  48.         params = self.extend(self.fixed_params, self.unpack(x))
  49.         if self.batch_size is not None:
  50.             X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
  51.             if self.parallel:
  52.                 arguments = [(X[i::splits], Y[i::splits], params) for i in xrange(splits)]
  53.                 LL = sum(self.pool.map_async(self.eval_f_LL, arguments).get(9999999))
  54.                 KL = self.vssgp.f['KL'](**self.extend({'X': [[0]], 'Y': [[0]]}, params))
  55.             else:
  56.                 split = np.random.randint(splits)
  57.                 LL = self.N / self.batch_size * self.vssgp.f['LL'](**self.extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
  58.                 print LL
  59.                 KL = self.vssgp.f['KL'](**self.extend({'X': [[0]], 'Y': [[0]]}, params))
  60.         else:
  61.             params = self.extend(self.inputs, params)
  62.             LL, KL = self.vssgp.f['LL'](**params), self.vssgp.f['KL'](**params)
  63.         return -(LL - KL)
  64.  
  65.     def fprime(self, x):
  66.         grads, params = [], self.extend(self.fixed_params, self.unpack(x))
  67.         for n in self.opt_param_names:
  68.             if self.batch_size is not None:
  69.                 X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
  70.                 if self.parallel:
  71.                     arguments = [(n, X[i::splits], Y[i::splits], params) for i in xrange(splits)]
  72.                     dLL = sum(self.pool.map_async(self.eval_g_LL, arguments).get(9999999))
  73.                     dKL = self.vssgp.g[n]['KL'](**self.extend({'X': [[0]], 'Y': [[0]]}, params))
  74.                 else:
  75.                     split = np.random.randint(splits)
  76.                     dLL = self.N / self.batch_size * self.vssgp.g[n]['LL'](**self.extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
  77.                     dKL = self.vssgp.g[n]['KL'](**self.extend({'X': [[0]], 'Y': [[0]]}, params))
  78.             else:
  79.                 params = self.extend(self.inputs, params)
  80.                 dLL, dKL = self.vssgp.g[n]['LL'](**params), self.vssgp.g[n]['KL'](**params)
  81.             grads += [-(dLL - dKL)]
  82.         return np.concatenate([grad.flatten() for grad in grads])
  83.  
  84.     def callback(self, x):
  85.         if self.callback_counter[0]%self.print_interval == 0:
  86.             opt_params = self.unpack(x)
  87.             params = self.extend(self.inputs, self.fixed_params, opt_params)
  88.             LL = self.vssgp.f['LL'](**params)
  89.             KL = self.vssgp.f['KL'](**params)
  90.             print LL - KL
  91.         self.callback_counter[0] += 1
  92.  


Above all, the global variables are now gone.
But now I get this error:


Traceback (most recent call last):

  File "<ipython-input-4-f919e99b6eea>", line 1, in <module>
    if __name__== '__main__' : STARTME()

  File "C:/Users/flo9fe/Desktop/vSSGP_LVM/vssgp_example.py", line 48, in STARTME
    options={'ftol': 0, 'disp': False, 'maxiter': 500}, tol=0, callback=vssgp_opt.callback)

  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\_minimize.py", line 450, in minimize
    callback=callback, **options)

  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\lbfgsb.py", line 328, in _minimize_lbfgsb
    f, g = func_and_grad(x)

  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\lbfgsb.py", line 278, in func_and_grad
    f = fun(x, *args)

  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\optimize.py", line 292, in function_wrapper
    return function(*(wrapper_args + args))

  File "vssgp_opt.py", line 53, in func
    LL = sum(self.pool.map_async(self.eval_f_LL, arguments).get(9999999))

  File "C:\Program Files\Anaconda2\lib\multiprocessing\pool.py", line 567, in get
    raise self._value

PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed
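
Some context on this error (a hedged sketch, not the project code): on Python 2, `pickle` cannot serialize bound methods, and `multiprocessing.Pool` must pickle the callable it sends to its worker processes, so passing `self.eval_f_LL` fails where a module-level function would work. The names below are illustrative only:

  1. import multiprocessing
  2.  
  3. def eval_f_LL(arguments):
  4.     # A module-level function is picklable by its qualified name;
  5.     # this stands in for the real likelihood evaluation.
  6.     x, y = arguments
  7.     return x * y
  8.  
  9. class Optimizer(object):
  10.     def __init__(self):
  11.         self.pool = multiprocessing.Pool(2)
  12.  
  13.     def func(self):
  14.         arguments = [(1, 2), (3, 4)]
  15.         return sum(self.pool.map_async(eval_f_LL, arguments).get(9999999))
  16.  
  17. if __name__ == '__main__':
  18.     print(Optimizer().func())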
Romaxx
User
Posts: 62
Joined: Thursday 26 January 2017, 18:53

Re: Error message

Post by Romaxx » Monday 27 February 2017, 13:26

Hello,

I am simply bumping this message now, in the hope that someone familiar with Theano and GPUs can still give an answer here.
If this is not allowed, please delete it.

Regards, Romaxx
