Error message

If you're not sure which of the other forums to post your question in, then this forum for general questions is definitely the right place.
Romaxx
User
Posts: 62
Registered: Thursday 26 January 2017, 18:53

Hello everyone,

I'm getting the following error:

Code: Select all

 File "C:\Program Files\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 888, in debugfile
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
  File "C:\Program Files\Anaconda2\lib\bdb.py", line 400, in run
    exec cmd in globals, locals
  File "<string>", line 1, in <module>
  File "C:\Program Files\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
    execfile(filename, namespace)
  File "C:\Program Files\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 87, in execfile
    exec(compile(scripttext, filename, 'exec'), glob, loc)
  File "c:/users/flo9fe/desktop/vssgp_lvm/vssgp_example.py", line 50, in <module>
    options={'ftol': 0, 'disp': False, 'maxiter': 500}, tol=0, callback=vssgp_opt.callback)
  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\_minimize.py", line 450, in minimize
    callback=callback, **options)
  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\lbfgsb.py", line 328, in _minimize_lbfgsb
    f, g = func_and_grad(x)
  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\lbfgsb.py", line 278, in func_and_grad
    f = fun(x, *args)
  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\optimize.py", line 292, in function_wrapper
    return function(*(wrapper_args + args))
  File "vssgp_opt.py", line 53, in func
    LL = sum(pool.map_async(eval_f_LL, arguments).get(9999999))
AttributeError: 'NoneType' object has no attribute 'map_async'
The function in question and the error can be found at line 53:

Code: Select all

import numpy as np
from vssgp_model import VSSGP
import pylab
import multiprocessing
def extend(x, y, z = {}):
    return dict(x.items() + y.items() + z.items())
pool, global_f, global_g = None, None, None
def eval_f_LL(X, Y, params):
    return global_f['LL'](**extend({'X': X, 'Y': Y}, params))
def eval_g_LL(name, X, Y, params):
    return global_g[name]['LL'](**extend({'X': X, 'Y': Y}, params))

class VSSGP_opt():
    def __init__(self, N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A = False, test_set = {},
                 parallel = False, batch_size = None, components = None, print_interval = None):
        self.vssgp, self.N, self.Q, self.K, self.fixed_params = VSSGP(use_exact_A), N, Q, K, fixed_params
        self.use_exact_A, self.parallel, self.batch_size = use_exact_A, parallel, batch_size
        self.inputs, self.test_set = inputs, test_set
        self.print_interval = 10 if print_interval is None else print_interval
        self.opt_param_names = [n for n,_ in opt_params.iteritems()]
        opt_param_values = [np.atleast_2d(opt_params[n]) for n in self.opt_param_names]
        self.shapes = [v.shape for v in opt_param_values]
        self.sizes = [sum([np.prod(x) for x in self.shapes[:i]]) for i in xrange(len(self.shapes)+1)]
        self.components = opt_params['lSigma'].shape[2] if components is None else components
        self.colours = [np.random.rand(3,1) for c in xrange(self.components)]
        self.callback_counter = [0]
        if batch_size is not None:
            if parallel:
                global pool, global_f, global_g
                global_f, global_g = self.vssgp.f, self.vssgp.g
                if __name__ == '__main__':
                    pool = multiprocessing.Pool(int(self.N / self.batch_size))
            else:
                self.params = np.concatenate([v.flatten() for v in opt_param_values])
                self.param_updates = np.zeros_like(self.params)
                self.moving_mean_squared = np.zeros_like(self.params)
                self.learning_rates = 1e-2*np.ones_like(self.params)


    def unpack(self, x):
        x_param_values = [x[self.sizes[i-1]:self.sizes[i]].reshape(self.shapes[i-1]) for i in xrange(1,len(self.shapes)+1)]
        params = {n:v for (n,v) in zip(self.opt_param_names, x_param_values)}
        if 'ltau' in params:
            params['ltau'] = params['ltau'].squeeze()
        return params

    def func(self, x):
        params = extend(self.fixed_params, self.unpack(x))
        if self.batch_size is not None:
            X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
            if self.parallel:
                arguments = [(X[i::splits], Y[i::splits], params) for i in xrange(splits)]
                LL = sum(pool.map_async(eval_f_LL, arguments).get(9999999))
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
            else:
                split = np.random.randint(splits)
                LL = self.N / self.batch_size * self.vssgp.f['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
                print(LL)
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
        else:
            params = extend(self.inputs, params)
            LL, KL = self.vssgp.f['LL'](**params), self.vssgp.f['KL'](**params)
        return -(LL - KL)
What is wrong here?
BlackJack

@Romaxx: The use of global variables is not right. If you use the keyword ``global``, you are doing something wrong in 99.9999% of cases.

I have never seen the ``if __name__ == '__main__':`` test deep inside a function anywhere. You should stop doing that. The functions/methods of a module should behave the same whether the module is imported or executed as a program. Otherwise testing becomes fun, because under tests it behaves differently than when run as a program.
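
For illustration, a minimal sketch (made-up names, not the demo's code) of the structure meant here: worker functions at module level, no ``global``, and the pool created only under the guard in the module that is run:

[codebox=python file=Unbenannt.txt]import multiprocessing

def square(x):
    # module-level worker function: picklable, which Windows requires
    return x * x

def main():
    pool = multiprocessing.Pool(4)
    try:
        print(pool.map(square, range(10)))
    finally:
        pool.close()
        pool.join()

if __name__ == '__main__':
    main()
[/code]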
Romaxx
User
Posts: 62
Registered: Thursday 26 January 2017, 18:53

Hello,

thanks for your reply.

I have now done the following:

[codebox=python file=Unbenannt.txt]import numpy as np
from vssgp_model import VSSGP
import pylab
import multiprocessing
def extend(x, y, z = {}):
    return dict(x.items() + y.items() + z.items())
pool, global_f, global_g = None, None, None
def eval_f_LL(X, Y, params):
    return global_f['LL'](**extend({'X': X, 'Y': Y}, params))
def eval_g_LL(name, X, Y, params):
    return global_g[name]['LL'](**extend({'X': X, 'Y': Y}, params))

class VSSGP_opt():
    def __init__(self, N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A = False, test_set = {},
                 parallel = False, batch_size = None, components = None, print_interval = None):
        self.vssgp, self.N, self.Q, self.K, self.fixed_params = VSSGP(use_exact_A), N, Q, K, fixed_params
        self.use_exact_A, self.parallel, self.batch_size = use_exact_A, parallel, batch_size
        self.inputs, self.test_set = inputs, test_set
        self.print_interval = 10 if print_interval is None else print_interval
        self.opt_param_names = [n for n,_ in opt_params.iteritems()]
        opt_param_values = [np.atleast_2d(opt_params[n]) for n in self.opt_param_names]
        self.shapes = [v.shape for v in opt_param_values]
        self.sizes = [sum([np.prod(x) for x in self.shapes[:i]]) for i in xrange(len(self.shapes)+1)]
        self.components = opt_params['lSigma'].shape[2] if components is None else components
        self.colours = [np.random.rand(3,1) for c in xrange(self.components)]
        self.callback_counter = [0]
        if batch_size is not None:
            if parallel:
                self.pool = multiprocessing.Pool(int(self.N / self.batch_size))
            else:
                self.params = np.concatenate([v.flatten() for v in opt_param_values])
                self.param_updates = np.zeros_like(self.params)
                self.moving_mean_squared = np.zeros_like(self.params)
                self.learning_rates = 1e-2*np.ones_like(self.params)


    def unpack(self, x):
        x_param_values = [x[self.sizes[i-1]:self.sizes[i]].reshape(self.shapes[i-1]) for i in xrange(1,len(self.shapes)+1)]
        params = {n:v for (n,v) in zip(self.opt_param_names, x_param_values)}
        if 'ltau' in params:
            params['ltau'] = params['ltau'].squeeze()
        return params

    def func(self, x):
        params = extend(self.fixed_params, self.unpack(x))
        if self.batch_size is not None:
            X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
            if self.parallel:
                arguments = [(X[i::splits], Y[i::splits], params) for i in xrange(splits)]
                LL = sum(self.pool.map_async(eval_f_LL, arguments).get(9999999))
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
            else:
                split = np.random.randint(splits)
                LL = self.N / self.batch_size * self.vssgp.f['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
                print(LL)
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
        else:
            params = extend(self.inputs, params)
            LL, KL = self.vssgp.f['LL'](**params), self.vssgp.f['KL'](**params)
        return -(LL - KL)[/code]

And I get this error:

Code: Select all

RuntimeError: 
            Attempt to start a new process before the current process
            has finished its bootstrapping phase.

            This probably means that you are on Windows and you have
            forgotten to use the proper idiom in the main module:

                if __name__ == '__main__':
                    freeze_support()
                    ...

            The "freeze_support()" line can be omitted if the program
            is not going to be frozen to produce a Windows executable.
If I put

[codebox=pycon file=Unbenannt.txt] if __name__ == '__main__':
[/code]

back in at the place in question, I get:

Code: Select all

File "C:\Program Files\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 888, in debugfile
    debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
  File "C:\Program Files\Anaconda2\lib\bdb.py", line 400, in run
    exec cmd in globals, locals
  File "<string>", line 1, in <module>
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>


  File "C:\Program Files\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
    execfile(filename, namespace)
  File "C:\Program Files\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 87, in execfile
    exec(compile(scripttext, filename, 'exec'), glob, loc)
  File "c:/users/flo9fe/desktop/vssgp_lvm/vssgp_example.py", line 50, in <module>
    options={'ftol': 0, 'disp': False, 'maxiter': 500}, tol=0, callback=vssgp_opt.callback)
  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\_minimize.py", line 450, in minimize
    callback=callback, **options)
  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\lbfgsb.py", line 328, in _minimize_lbfgsb
    f, g = func_and_grad(x)
  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\lbfgsb.py", line 278, in func_and_grad
    f = fun(x, *args)
  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\optimize.py", line 292, in function_wrapper
    return function(*(wrapper_args + args))
  File "vssgp_opt.py", line 52, in func
    LL = sum(self.pool.map_async(eval_f_LL, arguments).get(9999999))
AttributeError: VSSGP_opt instance has no attribute 'pool'
BlackJack

@Romaxx: What is the "aforementioned" place? Apart from that, you are still using global data structures, which won't exist in the other processes.
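
A small sketch (a hypothetical module, not your code) of why those globals fail under Windows: each worker process imports the module afresh, so a global that was only assigned at runtime in the parent is back at its initial value in the workers:

[codebox=python file=Unbenannt.txt]import multiprocessing

shared = None  # what a freshly imported copy of this module sees

def worker(_):
    return shared  # in a spawned worker process this is still None

def main():
    global shared
    shared = 'only set in the parent process'
    pool = multiprocessing.Pool(2)
    print(pool.map(worker, range(2)))  # on Windows: [None, None]
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()
[/code]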
Romaxx
User
Posts: 62
Registered: Thursday 26 January 2017, 18:53

Sorry for the imprecision.
I mean

[codebox=python file=Unbenannt.txt]if __name__ == '__main__':[/code]

inserted again at line 28/29.

I have now changed my file to the following:

[codebox=python file=Unbenannt.txt]import numpy as np
from vssgp_model import VSSGP
import pylab
import multiprocessing
def extend(x, y, z = {}):
    return dict(x.items() + y.items() + z.items())
def eval_f_LL(X, Y, params):
    return VSSGP.f['LL'](**extend({'X': X, 'Y': Y}, params))
def eval_g_LL(name, X, Y, params):
    return VSSGP.g[name]['LL'](**extend({'X': X, 'Y': Y}, params))

class VSSGP_opt():
    def __init__(self, N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A = False, test_set = {},
                 parallel = False, batch_size = None, components = None, print_interval = None):
        self.vssgp, self.N, self.Q, self.K, self.fixed_params = VSSGP(use_exact_A), N, Q, K, fixed_params
        self.use_exact_A, self.parallel, self.batch_size = use_exact_A, parallel, batch_size
        self.inputs, self.test_set = inputs, test_set
        self.print_interval = 10 if print_interval is None else print_interval
        self.opt_param_names = [n for n,_ in opt_params.iteritems()]
        opt_param_values = [np.atleast_2d(opt_params[n]) for n in self.opt_param_names]
        self.shapes = [v.shape for v in opt_param_values]
        self.sizes = [sum([np.prod(x) for x in self.shapes[:i]]) for i in xrange(len(self.shapes)+1)]
        self.components = opt_params['lSigma'].shape[2] if components is None else components
        self.colours = [np.random.rand(3,1) for c in xrange(self.components)]
        self.callback_counter = [0]
        if batch_size is not None:
            if parallel:
                self.pool = multiprocessing.Pool(int(self.N / self.batch_size))
            else:
                self.params = np.concatenate([v.flatten() for v in opt_param_values])
                self.param_updates = np.zeros_like(self.params)
                self.moving_mean_squared = np.zeros_like(self.params)
                self.learning_rates = 1e-2*np.ones_like(self.params)


    def unpack(self, x):
        x_param_values = [x[self.sizes[i-1]:self.sizes[i]].reshape(self.shapes[i-1]) for i in xrange(1,len(self.shapes)+1)]
        params = {n:v for (n,v) in zip(self.opt_param_names, x_param_values)}
        if 'ltau' in params:
            params['ltau'] = params['ltau'].squeeze()
        return params

    def func(self, x):
        params = extend(self.fixed_params, self.unpack(x))
        if self.batch_size is not None:
            X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
            if self.parallel:
                arguments = [(X[i::splits], Y[i::splits], params) for i in xrange(splits)]
                LL = sum(self.pool.map_async(eval_f_LL, arguments).get(9999999))
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
            else:
                split = np.random.randint(splits)
                LL = self.N / self.batch_size * self.vssgp.f['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
                print(LL)
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
        else:
            params = extend(self.inputs, params)
            LL, KL = self.vssgp.f['LL'](**params), self.vssgp.f['KL'](**params)
        return -(LL - KL)[/code]

But without

[codebox=python file=Unbenannt.txt]if __name__ == '__main__':[/code]

at line 28/29, I again get the error:

Code: Select all

RuntimeError:
            Attempt to start a new process before the current process
            has finished its bootstrapping phase.

            This probably means that you are on Windows and you have
            forgotten to use the proper idiom in the main module:

                if __name__ == '__main__':
                    freeze_support()
                    ...

            The "freeze_support()" line can be omitted if the program
            is not going to be frozen to produce a Windows executable.
and with it:

Code: Select all

File "C:\Program Files\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 888, in debugfile
    debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
  File "C:\Program Files\Anaconda2\lib\bdb.py", line 400, in run
    exec cmd in globals, locals
  File "<string>", line 1, in <module>
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>


  File "C:\Program Files\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
    execfile(filename, namespace)
  File "C:\Program Files\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 87, in execfile
    exec(compile(scripttext, filename, 'exec'), glob, loc)
  File "c:/users/flo9fe/desktop/vssgp_lvm/vssgp_example.py", line 50, in <module>
    options={'ftol': 0, 'disp': False, 'maxiter': 500}, tol=0, callback=vssgp_opt.callback)
  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\_minimize.py", line 450, in minimize
    callback=callback, **options)
  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\lbfgsb.py", line 328, in _minimize_lbfgsb
    f, g = func_and_grad(x)
  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\lbfgsb.py", line 278, in func_and_grad
    f = fun(x, *args)
  File "C:\Program Files\Anaconda2\lib\site-packages\scipy\optimize\optimize.py", line 292, in function_wrapper
    return function(*(wrapper_args + args))
  File "vssgp_opt.py", line 52, in func
    LL = sum(self.pool.map_async(eval_f_LL, arguments).get(9999999))
AttributeError: VSSGP_opt instance has no attribute 'pool'
I should add that this is not my code; I'd like to get it running, though, since it's a demo (most likely optimised for Linux).

Regards
BlackJack

@Romaxx: The error message (and the documentation of `multiprocessing`) says that the main module, i.e. the one that is executed as a program, has to be protected this way. Which you should do anyway, even when not using multiprocessing.

And I also didn't mean that you should simply delete the ``if`` line; that of course changes the behaviour of the program. I meant that one wouldn't write it like this overall. Well, at least I wouldn't.
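
Concretely, a minimal skeleton (made-up worker, not the demo) of what the error message asks for in the *main module*; the `freeze_support()` call is, as the message itself says, only needed for frozen Windows executables:

[codebox=python file=Unbenannt.txt]import multiprocessing

def work(x):
    return x ** 2

if __name__ == '__main__':
    multiprocessing.freeze_support()  # harmless here; required only when frozen
    pool = multiprocessing.Pool(2)
    print(pool.map(work, [1, 2, 3]))
    pool.close()
    pool.join()
[/code]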
Romaxx
User
Posts: 62
Registered: Thursday 26 January 2017, 18:53

OK, let's forget my changes for now.
I have the following file:

[codebox=python file=Unbenannt.txt]import numpy as np
from vssgp_model import VSSGP
import multiprocessing
def extend(x, y, z = {}):
    return dict(x.items() + y.items() + z.items())
pool, global_f, global_g = None, None, None
def eval_f_LL(X, Y, params):
    return global_f['LL'](**extend({'X': X, 'Y': Y}, params))
def eval_g_LL(name, X, Y, params):
    return global_g[name]['LL'](**extend({'X': X, 'Y': Y}, params))

class VSSGP_opt():
    def __init__(self, N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A = False, test_set = {},
                 parallel = False, batch_size = None, components = None, print_interval = None):
        self.vssgp, self.N, self.Q, self.K, self.fixed_params = VSSGP(use_exact_A), N, Q, K, fixed_params
        self.use_exact_A, self.parallel, self.batch_size = use_exact_A, parallel, batch_size
        self.inputs, self.test_set = inputs, test_set
        self.print_interval = 10 if print_interval is None else print_interval
        self.opt_param_names = [n for n,_ in opt_params.iteritems()]
        opt_param_values = [np.atleast_2d(opt_params[n]) for n in self.opt_param_names]
        self.shapes = [v.shape for v in opt_param_values]
        self.sizes = [sum([np.prod(x) for x in self.shapes[:i]]) for i in xrange(len(self.shapes)+1)]
        self.components = opt_params['lSigma'].shape[2] if components is None else components
        self.colours = [np.random.rand(3,1) for c in xrange(self.components)]
        self.callback_counter = [0]
        if 'train_ind' not in test_set:
            print('train_ind not found!')
            self.test_set['train_ind'] = np.arange(inputs['X'].shape[0]).astype(int)
            self.test_set['test_ind'] = np.arange(0).astype(int)
        if batch_size is not None:
            if parallel:
                global pool, global_f, global_g
                global_f, global_g = self.vssgp.f, self.vssgp.g
                pool = multiprocessing.Pool(int(self.N / self.batch_size))
            else:
                self.params = np.concatenate([v.flatten() for v in opt_param_values])
                self.param_updates = np.zeros_like(self.params)
                self.moving_mean_squared = np.zeros_like(self.params)
                self.learning_rates = 1e-2*np.ones_like(self.params)


    def unpack(self, x):
        x_param_values = [x[self.sizes[i-1]:self.sizes[i]].reshape(self.shapes[i-1]) for i in xrange(1,len(self.shapes)+1)]
        params = {n:v for (n,v) in zip(self.opt_param_names, x_param_values)}
        if 'ltau' in params:
            params['ltau'] = params['ltau'].squeeze()
        return params

    def func(self, x):
        params = extend(self.fixed_params, self.unpack(x))
        if self.batch_size is not None:
            X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
            if self.parallel:
                arguments = [(X[i::splits], Y[i::splits], params) for i in xrange(splits)]
                LL = sum(pool.map_async(eval_f_LL, arguments).get(9999999))
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
            else:
                split = np.random.randint(splits)
                LL = self.N / self.batch_size * self.vssgp.f['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
                print(LL)
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
        else:
            params = extend(self.inputs, params)
            LL, KL = self.vssgp.f['LL'](**params), self.vssgp.f['KL'](**params)
        return -(LL - KL)

    def fprime(self, x):
        grads, params = [], extend(self.fixed_params, self.unpack(x))
        for n in self.opt_param_names:
            if self.batch_size is not None:
                X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
                if self.parallel:
                    arguments = [(n, X[i::splits], Y[i::splits], params) for i in xrange(splits)]
                    dLL = sum(pool.map_async(eval_g_LL, arguments).get(9999999))
                    dKL = self.vssgp.g[n]['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
                else:
                    split = np.random.randint(splits)
                    dLL = self.N / self.batch_size * self.vssgp.g[n]['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
                    dKL = self.vssgp.g[n]['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
            else:
                params = extend(self.inputs, params)
                dLL, dKL = self.vssgp.g[n]['LL'](**params), self.vssgp.g[n]['KL'](**params)
            grads += [-(dLL - dKL)]
        return np.concatenate([grad.flatten() for grad in grads])

    def callback(self, x):
        if self.callback_counter[0]%self.print_interval == 0:
            opt_params = self.unpack(x)
            params = extend(self.inputs, self.fixed_params, opt_params)
            LL = self.vssgp.f['LL'](**params)
            KL = self.vssgp.f['KL'](**params)
            print(LL - KL)
        self.callback_counter[0] += 1[/code]

This file merely receives, via self.vssgp.g and self.vssgp.f respectively, the functions that are to be executed.

How do I change this file so that it runs in parallel?

Can you help me here?

I want to get it running under Windows.

The complete function package of the demo can be found here: https://github.com/yaringal/VSSGP

though really only the file quoted here is the one that matters for the parallelisation.
Romaxx
User
Posts: 62
Registered: Thursday 26 January 2017, 18:53

Can you at least tell me how you would write it?

Maybe then I'll get a better feel for how I can rewrite it.
BlackJack

@Romaxx: I would write it the way the `multiprocessing` API demands. The module that is executed as a program must be importable without effects. That is *not at all* the case here. All the code simply sits at module level. The code belongs in a function, which is then protected with the ``if __name__ == '__main__':`` idiom.

If it still doesn't work then, I would either test it under Linux first, or ask the author of the code.
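
Schematically, a sketch (with a stand-in objective, not the demo's set-up) of what that means for the module you run: only definitions at module level, and everything that actually computes moves into a function called under the guard:

[codebox=python file=Unbenannt.txt]from scipy.optimize import minimize
import numpy as np

def objective(x):
    # stand-in for vssgp_opt.func
    return ((x - 3.0)**2).sum()

def main():
    # the demo's data and parameter set-up would go here
    x0 = np.zeros(2)
    res = minimize(objective, x0, method='L-BFGS-B')
    print(res.x)

if __name__ == '__main__':
    main()  # runs when executed as a program, not on import
[/code]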
Romaxx
User
Posts: 62
Registered: Thursday 26 January 2017, 18:53

Hello,

thanks for your reply.

I'm finding it hard to follow you. What do you mean by 'import without effects'? I'm afraid such terms are unknown to me.
And 'The code belongs in a function, which is then protected with the ``if __name__ == '__main__':`` idiom'.
Which code do you mean here? This one perhaps: pool = multiprocessing.Pool(int(self.N / self.batch_size)).
So you mean:

[codebox=python file=Unbenannt.txt]class VSSGP_opt():
    def __init__(self, N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A = False, test_set = {},
                 parallel = False, batch_size = None, components = None, print_interval = None):
        self.vssgp, self.N, self.Q, self.K, self.fixed_params = VSSGP(use_exact_A), N, Q, K, fixed_params
        self.use_exact_A, self.parallel, self.batch_size = use_exact_A, parallel, batch_size
        self.inputs, self.test_set = inputs, test_set
        self.print_interval = 10 if print_interval is None else print_interval
        self.opt_param_names = [n for n,_ in opt_params.iteritems()]
        opt_param_values = [np.atleast_2d(opt_params[n]) for n in self.opt_param_names]
        self.shapes = [v.shape for v in opt_param_values]
        self.sizes = [sum([np.prod(x) for x in self.shapes[:i]]) for i in xrange(len(self.shapes)+1)]
        self.components = opt_params['lSigma'].shape[2] if components is None else components
        self.colours = [np.random.rand(3,1) for c in xrange(self.components)]
        self.callback_counter = [0]
        if batch_size is not None:
            if parallel:
                global pool, global_f, global_g
                global_f, global_g = self.vssgp.f, self.vssgp.g
                if __name__ == '__main__':
                    multiprocessing.freeze_support()
                    pool = multiprocessing.Pool(int(self.N / self.batch_size))
            else:
                self.params = np.concatenate([v.flatten() for v in opt_param_values])
                self.param_updates = np.zeros_like(self.params)
                self.moving_mean_squared = np.zeros_like(self.params)
                self.learning_rates = 1e-2*np.ones_like(self.params)
    def multiprocess(self):
        if __name__ == '__main__':
            pool = multiprocessing.Pool(int(self.N / self.batch_size))
        return (pool)
[/code]

And then call 'pool' under 'def func(self, x):'?

[codebox=python file=Unbenannt.txt]def func(self, x):
    params = extend(self.fixed_params, self.unpack(x))
    if self.batch_size is not None:
        X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
        if self.parallel:
            arguments = [(X[i::splits], Y[i::splits], params) for i in xrange(splits)]
            pool = self.multiprocess()
            LL = sum(pool.map_async(eval_f_LL, arguments).get(9999999))
            KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
        else:
            split = np.random.randint(splits)
            LL = self.N / self.batch_size * self.vssgp.f['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
            print LL
            KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
    else:
        params = extend(self.inputs, params)
        LL, KL = self.vssgp.f['LL'](**params), self.vssgp.f['KL'](**params)
    return -(LL - KL)[/code]

The problem is that if I implement it like this, program execution doesn't pass through 'if __name__ == '__main__':', so I can't return a 'pool' from 'multiprocess'.
BlackJack

@Romaxx: I mean the code in the module that is executed as a program. All of it. Importing without effect means that you can import a module without anything happening (apart from constants, functions, and classes being defined). In a clean program that should hold for every module. With `multiprocessing` it is, for example, very important, as you can see. But also for testing, automated or manual for debugging, and for some tools, for example for generating documentation from the code, it is important that importing a module does not run some larger program, let alone open files or database connections, talk to hardware, start external processes, ...

So if you change into the directory, start a Python shell, and enter ``import VSSGP_example`` there, then nothing may happen beyond the module being imported and, where applicable, constants, functions, and classes being defined in it. This holds transitively, i.e. modules that get imported as a consequence of that import must not have any further effects either. This is a basic requirement that the `multiprocessing` module imposes. At least on platforms that don't have a Unix-style `fork()`. On those platforms, new processes are started for the multiprocessing, and the module that was started as a program is imported in these processes in order to provide as similar an "environment" as possible.

So at the very least, everything from line 9 (inclusive) onwards in that module belongs in a function that is only called when the module is executed as a program.
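
To make the difference visible, a tiny sketch (hypothetical file name effects.py): everything above the guard runs on *every* import, and with `multiprocessing` under Windows that means once more in every worker process:

[codebox=python file=Unbenannt.txt]# effects.py -- a module with an import side effect
print('this runs on every import, including in each worker process')

def double(x):
    return 2 * x

if __name__ == '__main__':
    # only this block is skipped when the module is merely imported
    import multiprocessing
    pool = multiprocessing.Pool(2)
    print(pool.map(double, [1, 2, 3]))
    pool.close()
    pool.join()
[/code]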
Romaxx
User
Posts: 62
Registered: Thursday 26 January 2017, 18:53

So you mean vssgp_model.f.
But this function is, in the following,

[codebox=python file=Unbenannt.txt]import numpy as np
from vssgp_model import VSSGP
import multiprocessing
def extend(x, y, z = {}):
    return dict(x.items() + y.items() + z.items())
global_f, global_g = None, None
def eval_f_LL((X, Y, params)):
    return global_f['LL'](**extend({'X': X, 'Y': Y}, params))
def eval_g_LL((name, X, Y, params)):
    return global_g[name]['LL'](**extend({'X': X, 'Y': Y}, params))

class VSSGP_opt():
    def __init__(self, N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A = False, test_set = {},
                 parallel = False, batch_size = None, components = None, print_interval = None):
        self.vssgp, self.N, self.Q, self.K, self.fixed_params = VSSGP(use_exact_A), N, Q, K, fixed_params
        self.use_exact_A, self.parallel, self.batch_size = use_exact_A, parallel, batch_size
        self.inputs, self.test_set = inputs, test_set
        self.print_interval = 10 if print_interval is None else print_interval
        self.opt_param_names = [n for n,_ in opt_params.iteritems()]
        opt_param_values = [np.atleast_2d(opt_params[n]) for n in self.opt_param_names]
        self.shapes = [v.shape for v in opt_param_values]
        self.sizes = [sum([np.prod(x) for x in self.shapes[:i]]) for i in xrange(len(self.shapes)+1)]
        self.components = opt_params['lSigma'].shape[2] if components is None else components
        self.colours = [np.random.rand(3,1) for c in xrange(self.components)]
        self.callback_counter = [0]
        if batch_size is not None:
            if parallel:
                global global_f, global_g
                global_f, global_g = self.vssgp.f, self.vssgp.g
            else:
                self.params = np.concatenate([v.flatten() for v in opt_param_values])
                self.param_updates = np.zeros_like(self.params)
                self.moving_mean_squared = np.zeros_like(self.params)
                self.learning_rates = 1e-2*np.ones_like(self.params)

    def multiprocess(self):
        if __name__ == '__main__':
            pool = multiprocessing.Pool(int(self.N / self.batch_size))
        return (pool)


    def unpack(self, x):
        x_param_values = [x[self.sizes[i-1]:self.sizes[i]].reshape(self.shapes[i-1]) for i in xrange(1,len(self.shapes)+1)]
        params = {n:v for (n,v) in zip(self.opt_param_names, x_param_values)}
        if 'ltau' in params:
            params['ltau'] = params['ltau'].squeeze()
        return params

    def func(self, x):
        params = extend(self.fixed_params, self.unpack(x))
        if self.batch_size is not None:
            X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
            if self.parallel:
                arguments = [(X[i::splits], Y[i::splits], params) for i in xrange(splits)]
                pool = self.multiprocess()
                LL = sum(pool.map_async(eval_f_LL, arguments).get(9999999))
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
            else:
                split = np.random.randint(splits)
                LL = self.N / self.batch_size * self.vssgp.f['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
                print LL
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
        else:
            params = extend(self.inputs, params)
            LL, KL = self.vssgp.f['LL'](**params), self.vssgp.f['KL'](**params)
        return -(LL - KL)[/code]

merely loaded in lines 2, 15 and 29 and not executed.
The problem with this function vssgp_model.f is that it is a Theano-compiled function, which was created to compute e.g. the gradient or the function value efficiently; making changes there is probably not easy.
BlackJack

@Romaxx: Why would I mean ``vssgp_model.f``? What happens (or happened) after the change that is needed to protect the code in `VSSGP_example` from being executed on import?
Kebap
User
Posts: 686
Registered: Tuesday 15 November 2011, 14:20
Location: Dortmund

Apparently some basics on the topic of importing Python modules are missing here.
Romaxx
User
Posts: 62
Registered: Thursday 26 January 2017, 18:53

OK, now I'm a bit confused too.
Can we start over from the beginning?
I have now read through Kebap's link and hope that with some support I can get further.

So, I have the following module for optimising a function with Theano.

[codebox=python file=Unbenannt.txt]import numpy as np
from vssgp_model import VSSGP
import multiprocessing
def extend(x, y, z = {}):
    return dict(x.items() + y.items() + z.items())
pool, global_f, global_g = None, None, None
def eval_f_LL(X, Y, params):
    return global_f['LL'](**extend({'X': X, 'Y': Y}, params))
def eval_g_LL(name, X, Y, params):
    return global_g[name]['LL'](**extend({'X': X, 'Y': Y}, params))

class VSSGP_opt():
    def __init__(self, N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A = False, test_set = {},
                 parallel = False, batch_size = None, components = None, print_interval = None):
        self.vssgp, self.N, self.Q, self.K, self.fixed_params = VSSGP(use_exact_A), N, Q, K, fixed_params
        self.use_exact_A, self.parallel, self.batch_size = use_exact_A, parallel, batch_size
        self.inputs, self.test_set = inputs, test_set
        self.print_interval = 10 if print_interval is None else print_interval
        self.opt_param_names = [n for n,_ in opt_params.iteritems()]
        opt_param_values = [np.atleast_2d(opt_params[n]) for n in self.opt_param_names]
        self.shapes = [v.shape for v in opt_param_values]
        self.sizes = [sum([np.prod(x) for x in self.shapes[:i]]) for i in xrange(len(self.shapes)+1)]
        self.components = opt_params['lSigma'].shape[2] if components is None else components
        self.colours = [np.random.rand(3,1) for c in xrange(self.components)]
        self.callback_counter = [0]
        if 'train_ind' not in test_set:
            print('train_ind not found!')
            self.test_set['train_ind'] = np.arange(inputs['X'].shape[0]).astype(int)
            self.test_set['test_ind'] = np.arange(0).astype(int)
        if batch_size is not None:
            if parallel:
                global pool, global_f, global_g
                global_f, global_g = self.vssgp.f, self.vssgp.g
                pool = multiprocessing.Pool(int(self.N / self.batch_size))
            else:
                self.params = np.concatenate([v.flatten() for v in opt_param_values])
                self.param_updates = np.zeros_like(self.params)
                self.moving_mean_squared = np.zeros_like(self.params)
                self.learning_rates = 1e-2*np.ones_like(self.params)


    def unpack(self, x):
        x_param_values = [x[self.sizes[i-1]:self.sizes[i]].reshape(self.shapes[i-1]) for i in xrange(1,len(self.shapes)+1)]
        params = {n:v for (n,v) in zip(self.opt_param_names, x_param_values)}
        if 'ltau' in params:
            params['ltau'] = params['ltau'].squeeze()
        return params

    def func(self, x):
        params = extend(self.fixed_params, self.unpack(x))
        if self.batch_size is not None:
            X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
            if self.parallel:
                arguments = [(X[i::splits], Y[i::splits], params) for i in xrange(splits)]
                LL = sum(pool.map_async(eval_f_LL, arguments).get(9999999))
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
            else:
                split = np.random.randint(splits)
                LL = self.N / self.batch_size * self.vssgp.f['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
                print(LL)
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
        else:
            params = extend(self.inputs, params)
            LL, KL = self.vssgp.f['LL'](**params), self.vssgp.f['KL'](**params)
        return -(LL - KL)

    def fprime(self, x):
        grads, params = [], extend(self.fixed_params, self.unpack(x))
        for n in self.opt_param_names:
            if self.batch_size is not None:
                X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
                if self.parallel:
                    arguments = [(n, X[i::splits], Y[i::splits], params) for i in xrange(splits)]
                    dLL = sum(pool.map_async(eval_g_LL, arguments).get(9999999))
                    dKL = self.vssgp.g[n]['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
                else:
                    split = np.random.randint(splits)
                    dLL = self.N / self.batch_size * self.vssgp.g[n]['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
                    dKL = self.vssgp.g[n]['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
            else:
                params = extend(self.inputs, params)
                dLL, dKL = self.vssgp.g[n]['LL'](**params), self.vssgp.g[n]['KL'](**params)
            grads += [-(dLL - dKL)]
        return np.concatenate([grad.flatten() for grad in grads])

    def callback(self, x):
        if self.callback_counter[0]%self.print_interval == 0:
            opt_params = self.unpack(x)
            params = extend(self.inputs, self.fixed_params, opt_params)
            LL = self.vssgp.f['LL'](**params)
            KL = self.vssgp.f['KL'](**params)
            print(LL - KL)
        self.callback_counter[0] += 1[/code]

On the very first run, Theano compiles the code of my function to be optimised, and via vssgp_model.f and vssgp_model.g I can output the function value of the function to be optimised and the gradient, respectively (with a certain input, of course, e.g. vssgp_model.f['LL'](**params); params is a list of variables).

As you can see, in lines 33, 34 there is no

[codebox=python file=Unbenannt.txt]if __name__ == '__main__':[/code]

to be found. This is the demo code, i.e. I haven't changed anything here. In my very first post I did have it in, though, precisely because I'd read in the multiprocessing documentation that one should actually include it. Doing that the straightforward way then went wrong, as was almost to be expected.

I believe, please correct me if I'm wrong, that with

[codebox=python file=Unbenannt.txt]import numpy as np
from vssgp_model import VSSGP
import pylab
import multiprocessing
def extend(x, y, z = {}):
    return dict(x.items() + y.items() + z.items())
def eval_f_LL(X, Y, params):
    out_f = VSSGP.f['LL'](**extend({'X': X, 'Y': Y}, params))
    return out_f
def eval_g_LL(name, X, Y, params):
    out_g = VSSGP.f['LL'](**extend({'X': X, 'Y': Y}, params))
    return out_g

class VSSGP_opt():
    def __init__(self, N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A = False, test_set = {},
                 parallel = False, batch_size = None, components = None, print_interval = None):
        self.vssgp, self.N, self.Q, self.K, self.fixed_params = VSSGP(use_exact_A), N, Q, K, fixed_params
        self.use_exact_A, self.parallel, self.batch_size = use_exact_A, parallel, batch_size
        self.inputs, self.test_set = inputs, test_set
        self.print_interval = 10 if print_interval is None else print_interval
        self.opt_param_names = [n for n,_ in opt_params.iteritems()]
        opt_param_values = [np.atleast_2d(opt_params[n]) for n in self.opt_param_names]
        self.shapes = [v.shape for v in opt_param_values]
        self.sizes = [sum([np.prod(x) for x in self.shapes[:i]]) for i in xrange(len(self.shapes)+1)]
        self.components = opt_params['lSigma'].shape[2] if components is None else components
        self.colours = [np.random.rand(3,1) for c in xrange(self.components)]
        self.callback_counter = [0]
        if batch_size is not None:
            if parallel:
                self.pool = multiprocessing.Pool(int(self.N / self.batch_size))
            else:
                self.params = np.concatenate([v.flatten() for v in opt_param_values])
                self.param_updates = np.zeros_like(self.params)
                self.moving_mean_squared = np.zeros_like(self.params)
                self.learning_rates = 1e-2*np.ones_like(self.params)


    def unpack(self, x):
        x_param_values = [x[self.sizes[i-1]:self.sizes[i]].reshape(self.shapes[i-1]) for i in xrange(1,len(self.shapes)+1)]
        params = {n:v for (n,v) in zip(self.opt_param_names, x_param_values)}
        if 'ltau' in params:
            params['ltau'] = params['ltau'].squeeze()
        return params

    def func(self, x):
        params = extend(self.fixed_params, self.unpack(x))
        if self.batch_size is not None:
            X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
            if self.parallel:
                arguments = [(X[i::splits], Y[i::splits], params) for i in xrange(splits)]
                LL = sum(self.pool.map_async(eval_f_LL, arguments).get(9999999))
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
            else:
                split = np.random.randint(splits)
                LL = self.N / self.batch_size * self.vssgp.f['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
                print(LL)
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
        else:
            params = extend(self.inputs, params)
            LL, KL = self.vssgp.f['LL'](**params), self.vssgp.f['KL'](**params)
        return -(LL - KL)

    def fprime(self, x):
        grads, params = [], extend(self.fixed_params, self.unpack(x))
        for n in self.opt_param_names:
            if self.batch_size is not None:
                X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
                if self.parallel:
                    arguments = [(n, X[i::splits], Y[i::splits], params) for i in xrange(splits)]
                    dLL = sum(self.pool.map_async(eval_g_LL, arguments).get(9999999))
                    dKL = self.vssgp.g[n]['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
                else:
                    split = np.random.randint(splits)
                    dLL = self.N / self.batch_size * self.vssgp.g[n]['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
                    dKL = self.vssgp.g[n]['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
            else:
                params = extend(self.inputs, params)
                dLL, dKL = self.vssgp.g[n]['LL'](**params), self.vssgp.g[n]['KL'](**params)
            grads += [-(dLL - dKL)]
        return np.concatenate([grad.flatten() for grad in grads])

    def callback(self, x):
        if self.callback_counter[0]%self.print_interval == 0:
            opt_params = self.unpack(x)
            params = extend(self.inputs, self.fixed_params, opt_params)
            LL = self.vssgp.f['LL'](**params)
            KL = self.vssgp.f['KL'](**params)
            print(LL - KL)
        self.callback_counter[0] += 1[/code]

I get rid of the global variables.

That's where I got lost.
How do I protect WHAT from execution?
Sorry for my possibly poor comprehension.

Thanks and regards
BlackJack

@Romaxx: This module is the wrong starting point. For `multiprocessing` you have to make the module that is *executed as the program* importable "effect-free". And I have already said which lines there need to be moved from module level into a function that is only executed when the program is run, but not on import. Essentially everything after the imports. Because otherwise all of it gets executed again by every process that `multiprocessing` starts in parallel, since `multiprocessing` imports that module first thing in the new process. The whole computation must not start again there as if *that* were the main program.
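
As pseudo-code, a sketch of the hierarchy (illustrative names; both "files" shown in one listing): class and function definitions may stay at module level, because defining them has no effect; only the code that actually runs the computation goes into a guarded function in the script you execute:

[codebox=python file=Unbenannt.txt]# my_module.py -- definitions only; importing this has no effects
class Optimiser(object):
    def run(self, x):
        return x * 2

# my_script.py -- the module executed as a program
# (there it would be: from my_module import Optimiser)

def main():
    # creating objects and computing happens only here
    opt = Optimiser()
    print(opt.run(21))

if __name__ == '__main__':
    main()
[/code]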
Romaxx
User
Posts: 62
Registered: Thursday 26 January 2017, 18:53

Thanks for your patience.

I still find it hard to understand the right order or approach, though.

At least I now think I understand that you actually mean the entire class VSSGP_opt():.

But how do I do that? The following is surely not correct:

[codebox=python file=Unbenannt.txt]
def new_function():
    class VSSGP_opt():
        def __init__
        ....
[/code]

Can you roughly tell me what the hierarchy should look like, in the form of 'pseudo-code'?

I don't want you to do my work for me; I'm only asking you for a minimal example.

In case it helps, I run

[codebox=python file=Unbenannt.txt]res = minimize(vssgp_opt.func, x0, method='L-BFGS-B', jac=vssgp_opt.fprime,
               options={'ftol': 0, 'disp': False, 'maxiter': 500}, tol=0, callback=vssgp_opt.callback)[/code]

and vssgp_model is

[codebox=python file=Unbenannt.txt]# To speed Theano up, create ram disk: mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
# Then use flag THEANO_FLAGS='base_compiledir=/mnt/ramdisk' python script.py
import sys; sys.path.insert(0, "../Theano"); sys.path.insert(0, "../../Theano")
import theano; import theano.tensor as T; import theano.sandbox.linalg as sT
import numpy as np
import cPickle

print('Theano version: ' + theano.__version__ + ', base compile dir: ' + theano.config.base_compiledir)
theano.config.mode = 'FAST_RUN'
theano.config.optimizer = 'fast_run'
theano.config.reoptimize_unpickled_function = False

class VSSGP:
    def __init__(self, use_exact_A = False):
        try:
            print('Trying to load model...')
            with open('model_exact_A.save' if use_exact_A else 'model.save', 'rb') as file_handle:
                self.f, self.g = cPickle.load(file_handle)
            print('Loaded!')
            return
        except:
            print('Failed. Creating a new model...')

        print('Setting up variables...')
        Z, mu, lSigma = T.dtensor3s('Z', 'mu', 'lSigma')
        X, Y, m, ls, lhyp, lalpha, lalpha_delta, a = T.dmatrices('X', 'Y', 'm', 'ls', 'lhyp', 'lalpha', 'lalpha_delta', 'a')
        b = T.dvector('b')
        ltau = T.dscalar('ltau')
        Sigma, alpha, alpha_delta, tau = T.exp(lSigma), T.exp(lalpha), T.exp(lalpha_delta), T.exp(ltau)
        alpha = alpha % 2*np.pi
        beta = T.minimum(alpha + alpha_delta, 2*np.pi)
        (N, Q), D, K = X.shape, Y.shape[1], mu.shape[1]
        sf2s, lss, ps = T.exp(lhyp[0]), T.exp(lhyp[1:1+Q]), T.exp(lhyp[1+Q:]) # length-scales abd periods

        print('Setting up model...')
        LL, KL, Y_pred_mean, Y_pred_var, EPhi, EPhiTPhi, opt_A_mean, opt_A_cov = self.get_model_exact_A(Y, X, Z, alpha, beta, mu, Sigma, m, ls, sf2s, lss, ps, tau, a, b, N, Q, D, K)

        print('Compiling model...')
        inputs = {'X': X, 'Y': Y, 'Z': Z, 'mu': mu, 'lSigma': lSigma, 'm': m, 'ls': ls, 'lalpha': lalpha,
                  'lalpha_delta': lalpha_delta, 'lhyp': lhyp, 'ltau': ltau, 'a': a, 'b': b}
        z = 0.0*sum([T.sum(v) for v in inputs.values()]) # solve a bug with derivative wrt inputs not in the graph
        f = zip(['opt_A_mean', 'opt_A_cov', 'EPhi', 'EPhiTPhi', 'Y_pred_mean', 'Y_pred_var', 'LL', 'KL'],
                [opt_A_mean, opt_A_cov, EPhi, EPhiTPhi, Y_pred_mean, Y_pred_var, LL, KL])
        self.f = {n: theano.function(inputs.values(), f+z, name=n, on_unused_input='ignore') for n,f in f}
        g = zip(['LL', 'KL'], [LL, KL])
        wrt = {'Z': Z, 'mu': mu, 'lSigma': lSigma, 'm': m, 'ls': ls, 'lalpha': lalpha,
               'lalpha_delta': lalpha_delta, 'lhyp': lhyp, 'ltau': ltau, 'a': a, 'b': b}
        self.g = {vn: {gn: theano.function(inputs.values(), T.grad(gv+z, vv), name='d'+gn+'_d'+vn,
                                           on_unused_input='ignore') for gn,gv in g} for vn, vv in wrt.iteritems()}

        with open('model_exact_A.save' if use_exact_A else 'model.save', 'wb') as file_handle:
            print('Saving model...')
            sys.setrecursionlimit(2000)
            cPickle.dump([self.f, self.g], file_handle, protocol=cPickle.HIGHEST_PROTOCOL)

    def get_EPhi(self, X, Z, alpha, beta, mu, Sigma, sf2s, lss, ps, K):
        two_over_K = 2.*sf2s[None, None, :]/K # N x K x comp
        mean_p, std_p = ps**-1, (2*np.pi*lss)**-1 # Q x comp
        Ew = std_p[:, None, :] * mu + mean_p[:, None, :] # Q x K x comp
        XBAR = 2 * np.pi * (X[:, :, None, None] - Z[None, :, :, :]) # N x Q x K x comp
        decay = T.exp(-0.5 * ((std_p[None, :, None, :] * XBAR)**2 * Sigma[None, :, :, :]).sum(1)) # N x K x comp

        cos_w = T.cos(alpha + (XBAR * Ew[None, :, :, :]).sum(1)) # N x K x comp
        EPhi = two_over_K**0.5 * decay * cos_w
        EPhi = EPhi.flatten(2) # N x K*comp

        cos_2w = T.cos(2 * alpha + 2 * (XBAR * Ew[None, :, :, :]).sum(1)) # N x K x comp
        E_cos_sq = two_over_K * (0.5 + 0.5*decay**4 * cos_2w) # N x K x comp
        EPhiTPhi = (EPhi.T).dot(EPhi)
        EPhiTPhi = EPhiTPhi - T.diag(T.diag(EPhiTPhi)) + T.diag(E_cos_sq.sum(0).flatten(1))
        return EPhi, EPhiTPhi, E_cos_sq

    def get_opt_A(self, tau, EPhiTPhi, YT_EPhi):
        SigInv = EPhiTPhi + (tau**-1 + 1e-4) * T.identity_like(EPhiTPhi)
        cholTauSigInv = tau**0.5 * sT.cholesky(SigInv)
        invCholTauSigInv = sT.matrix_inverse(cholTauSigInv)
        tauInvSig = invCholTauSigInv.T.dot(invCholTauSigInv)
        Sig_EPhiT_Y = tau * tauInvSig.dot(YT_EPhi.T)
        return Sig_EPhiT_Y, tauInvSig, cholTauSigInv

    def get_model_exact_A(self, Y, X, Z, alpha, beta, mu, Sigma, m, ls, sf2s, lss, ps, tau, a, b, N, Q, D, K):
        Y = Y - (X.dot(a) + b[None,:])
        EPhi, EPhiTPhi, E_cos_sq = self.get_EPhi(X, Z, alpha, beta, mu, Sigma, sf2s, lss, ps, K)
        YT_EPhi = Y.T.dot(EPhi)

        opt_A_mean, opt_A_cov, cholSigInv = self.get_opt_A(tau, EPhiTPhi, YT_EPhi)
        LL = (-0.5*N*D * np.log(2 * np.pi) + 0.5*N*D * T.log(tau) - 0.5*tau*T.sum(Y**2)
              - 0.5*D * T.sum(2*T.log(T.diag(cholSigInv)))
              + 0.5*tau * T.sum(opt_A_mean.T * YT_EPhi))

        KL_w = 0.5 * (Sigma + mu**2 - T.log(Sigma) - 1).sum()

        ''' For prediction, m is assumed to be [m_1, ..., m_d] with m_i = opt_a_i, and and ls = opt_A_cov '''
        Y_pred_mean = EPhi.dot(m) + (X.dot(a) + b[None,:])
        EphiTphi = EPhi[:, :, None] * EPhi[:, None, :] # N x K*comp x K*comp
        comp = sf2s.shape[0]
        EphiTphi = EphiTphi - T.eye(K*comp)[None, :, :] * EphiTphi + T.eye(K*comp)[None, :, :] * E_cos_sq.flatten(2)[:, :, None]
        Psi = T.sum(T.sum(EphiTphi * ls[None, :, :], 2), 1) # N
        flat_diag_n = E_cos_sq.flatten(2) - EPhi**2 # N x K*comp
        Y_pred_var = tau**-1 * T.eye(D) + np.transpose(m.T.dot(flat_diag_n[:, :, None] * m),(1,0,2)) \
                     + T.eye(D)[None, :, :] * Psi[:, None, None]

        return LL, KL_w, Y_pred_mean, Y_pred_var, EPhi, EPhiTPhi, opt_A_mean, opt_A_cov
[/code]
BlackJack

@Romaxx: Once more: that is the wrong module! The module that you *execute as a program* is affected first and foremost. And only *there* does comparing `__name__` with the value '__main__' make any sense at all, because in all other modules that condition can *never* be satisfied.
Romaxx
User
Posts: 62
Registered: Thursday 26 January 2017, 18:53

Hm, I have now basically posted all the modules. One of them is VSSGP_opt, i.e.

[codebox=python file=Unbenannt.txt]import numpy as np
from vssgp_model import VSSGP
import pylab
import multiprocessing
def extend(x, y, z = {}):
    return dict(x.items() + y.items() + z.items())
def eval_f_LL(X, Y, params):
    out_f = VSSGP.f['LL'](**extend({'X': X, 'Y': Y}, params))
    return out_f
def eval_g_LL(name, X, Y, params):
    out_g = VSSGP.f['LL'](**extend({'X': X, 'Y': Y}, params))
    return out_g

class VSSGP_opt():
    def __init__(self, N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A = False, test_set = {},
                 parallel = False, batch_size = None, components = None, print_interval = None):
        self.vssgp, self.N, self.Q, self.K, self.fixed_params = VSSGP(use_exact_A), N, Q, K, fixed_params
        self.use_exact_A, self.parallel, self.batch_size = use_exact_A, parallel, batch_size
        self.inputs, self.test_set = inputs, test_set
        self.print_interval = 10 if print_interval is None else print_interval
        self.opt_param_names = [n for n,_ in opt_params.iteritems()]
        opt_param_values = [np.atleast_2d(opt_params[n]) for n in self.opt_param_names]
        self.shapes = [v.shape for v in opt_param_values]
        self.sizes = [sum([np.prod(x) for x in self.shapes[:i]]) for i in xrange(len(self.shapes)+1)]
        self.components = opt_params['lSigma'].shape[2] if components is None else components
        self.colours = [np.random.rand(3,1) for c in xrange(self.components)]
        self.callback_counter = [0]
        if batch_size is not None:
            if parallel:
                self.pool = multiprocessing.Pool(int(self.N / self.batch_size))
            else:
                self.params = np.concatenate([v.flatten() for v in opt_param_values])
                self.param_updates = np.zeros_like(self.params)
                self.moving_mean_squared = np.zeros_like(self.params)
                self.learning_rates = 1e-2*np.ones_like(self.params)


    def unpack(self, x):
        x_param_values = [x[self.sizes[i-1]:self.sizes[i]].reshape(self.shapes[i-1]) for i in xrange(1,len(self.shapes)+1)]
        params = {n:v for (n,v) in zip(self.opt_param_names, x_param_values)}
        if 'ltau' in params:
            params['ltau'] = params['ltau'].squeeze()
        return params

    def func(self, x):
        params = extend(self.fixed_params, self.unpack(x))
        if self.batch_size is not None:
            X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
            if self.parallel:
                arguments = [(X[i::splits], Y[i::splits], params) for i in xrange(splits)]
                LL = sum(self.pool.map_async(eval_f_LL, arguments).get(9999999))
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
            else:
                split = np.random.randint(splits)
                LL = self.N / self.batch_size * self.vssgp.f['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
                print(LL)
                KL = self.vssgp.f['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
        else:
            params = extend(self.inputs, params)
            LL, KL = self.vssgp.f['LL'](**params), self.vssgp.f['KL'](**params)
        return -(LL - KL)

    def fprime(self, x):
        grads, params = [], extend(self.fixed_params, self.unpack(x))
        for n in self.opt_param_names:
            if self.batch_size is not None:
                X, Y, splits = self.inputs['X'], self.inputs['Y'], int(self.N / self.batch_size)
                if self.parallel:
                    arguments = [(n, X[i::splits], Y[i::splits], params) for i in xrange(splits)]
                    dLL = sum(self.pool.map_async(eval_g_LL, arguments).get(9999999))
                    dKL = self.vssgp.g[n]['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
                else:
                    split = np.random.randint(splits)
                    dLL = self.N / self.batch_size * self.vssgp.g[n]['LL'](**extend({'X': X[split::splits], 'Y': Y[split::splits]}, params))
                    dKL = self.vssgp.g[n]['KL'](**extend({'X': [[0]], 'Y': [[0]]}, params))
            else:
                params = extend(self.inputs, params)
                dLL, dKL = self.vssgp.g[n]['LL'](**params), self.vssgp.g[n]['KL'](**params)
            grads += [-(dLL - dKL)]
        return np.concatenate([grad.flatten() for grad in grads])

    def callback(self, x):
        if self.callback_counter[0]%self.print_interval == 0:
            opt_params = self.unpack(x)
            params = extend(self.inputs, self.fixed_params, opt_params)
            LL = self.vssgp.f['LL'](**params)
            KL = self.vssgp.f['KL'](**params)
            print(LL - KL)
        self.callback_counter[0] += 1[/code]

perhaps a bit confusing, because the class there has the same name. And then there is vssgp_model:


[codebox=python file=Unbenannt.txt]# To speed Theano up, create ram disk: mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
# Then use flag THEANO_FLAGS='base_compiledir=/mnt/ramdisk' python script.py
import sys; sys.path.insert(0, "../Theano"); sys.path.insert(0, "../../Theano")
import theano; import theano.tensor as T; import theano.sandbox.linalg as sT
import numpy as np
import cPickle

print('Theano version: ' + theano.__version__ + ', base compile dir: ' + theano.config.base_compiledir)
theano.config.mode = 'FAST_RUN'
theano.config.optimizer = 'fast_run'
theano.config.reoptimize_unpickled_function = False

class VSSGP:
    def __init__(self, use_exact_A = False):
        try:
            print('Trying to load model...')
            with open('model_exact_A.save' if use_exact_A else 'model.save', 'rb') as file_handle:
                self.f, self.g = cPickle.load(file_handle)
            print('Loaded!')
            return
        except:
            print('Failed. Creating a new model...')

        print('Setting up variables...')
        Z, mu, lSigma = T.dtensor3s('Z', 'mu', 'lSigma')
        X, Y, m, ls, lhyp, lalpha, lalpha_delta, a = T.dmatrices('X', 'Y', 'm', 'ls', 'lhyp', 'lalpha', 'lalpha_delta', 'a')
        b = T.dvector('b')
        ltau = T.dscalar('ltau')
        Sigma, alpha, alpha_delta, tau = T.exp(lSigma), T.exp(lalpha), T.exp(lalpha_delta), T.exp(ltau)
        alpha = alpha % 2*np.pi
        beta = T.minimum(alpha + alpha_delta, 2*np.pi)
        (N, Q), D, K = X.shape, Y.shape[1], mu.shape[1]
        sf2s, lss, ps = T.exp(lhyp[0]), T.exp(lhyp[1:1+Q]), T.exp(lhyp[1+Q:]) # length-scales abd periods

        print('Setting up model...')
        LL, KL, Y_pred_mean, Y_pred_var, EPhi, EPhiTPhi, opt_A_mean, opt_A_cov = self.get_model_exact_A(Y, X, Z, alpha, beta, mu, Sigma, m, ls, sf2s, lss, ps, tau, a, b, N, Q, D, K)

        print('Compiling model...')
        inputs = {'X': X, 'Y': Y, 'Z': Z, 'mu': mu, 'lSigma': lSigma, 'm': m, 'ls': ls, 'lalpha': lalpha,
                  'lalpha_delta': lalpha_delta, 'lhyp': lhyp, 'ltau': ltau, 'a': a, 'b': b}
        z = 0.0*sum([T.sum(v) for v in inputs.values()]) # solve a bug with derivative wrt inputs not in the graph
        f = zip(['opt_A_mean', 'opt_A_cov', 'EPhi', 'EPhiTPhi', 'Y_pred_mean', 'Y_pred_var', 'LL', 'KL'],
                [opt_A_mean, opt_A_cov, EPhi, EPhiTPhi, Y_pred_mean, Y_pred_var, LL, KL])
        self.f = {n: theano.function(inputs.values(), f+z, name=n, on_unused_input='ignore') for n,f in f}
        g = zip(['LL', 'KL'], [LL, KL])
        wrt = {'Z': Z, 'mu': mu, 'lSigma': lSigma, 'm': m, 'ls': ls, 'lalpha': lalpha,
               'lalpha_delta': lalpha_delta, 'lhyp': lhyp, 'ltau': ltau, 'a': a, 'b': b}
        self.g = {vn: {gn: theano.function(inputs.values(), T.grad(gv+z, vv), name='d'+gn+'_d'+vn,
                                           on_unused_input='ignore') for gn,gv in g} for vn, vv in wrt.iteritems()}

        with open('model_exact_A.save' if use_exact_A else 'model.save', 'wb') as file_handle:
            print('Saving model...')
            sys.setrecursionlimit(2000)
            cPickle.dump([self.f, self.g], file_handle, protocol=cPickle.HIGHEST_PROTOCOL)

    def get_EPhi(self, X, Z, alpha, beta, mu, Sigma, sf2s, lss, ps, K):
        two_over_K = 2.*sf2s[None, None, :]/K # N x K x comp
        mean_p, std_p = ps**-1, (2*np.pi*lss)**-1 # Q x comp
        Ew = std_p[:, None, :] * mu + mean_p[:, None, :] # Q x K x comp
        XBAR = 2 * np.pi * (X[:, :, None, None] - Z[None, :, :, :]) # N x Q x K x comp
        decay = T.exp(-0.5 * ((std_p[None, :, None, :] * XBAR)**2 * Sigma[None, :, :, :]).sum(1)) # N x K x comp

        cos_w = T.cos(alpha + (XBAR * Ew[None, :, :, :]).sum(1)) # N x K x comp
        EPhi = two_over_K**0.5 * decay * cos_w
        EPhi = EPhi.flatten(2) # N x K*comp

        cos_2w = T.cos(2 * alpha + 2 * (XBAR * Ew[None, :, :, :]).sum(1)) # N x K x comp
        E_cos_sq = two_over_K * (0.5 + 0.5*decay**4 * cos_2w) # N x K x comp
        EPhiTPhi = (EPhi.T).dot(EPhi)
        EPhiTPhi = EPhiTPhi - T.diag(T.diag(EPhiTPhi)) + T.diag(E_cos_sq.sum(0).flatten(1))
        return EPhi, EPhiTPhi, E_cos_sq

    def get_opt_A(self, tau, EPhiTPhi, YT_EPhi):
        SigInv = EPhiTPhi + (tau**-1 + 1e-4) * T.identity_like(EPhiTPhi)
        cholTauSigInv = tau**0.5 * sT.cholesky(SigInv)
        invCholTauSigInv = sT.matrix_inverse(cholTauSigInv)
        tauInvSig = invCholTauSigInv.T.dot(invCholTauSigInv)
        Sig_EPhiT_Y = tau * tauInvSig.dot(YT_EPhi.T)
        return Sig_EPhiT_Y, tauInvSig, cholTauSigInv

    def get_model_exact_A(self, Y, X, Z, alpha, beta, mu, Sigma, m, ls, sf2s, lss, ps, tau, a, b, N, Q, D, K):
        Y = Y - (X.dot(a) + b[None,:])
        EPhi, EPhiTPhi, E_cos_sq = self.get_EPhi(X, Z, alpha, beta, mu, Sigma, sf2s, lss, ps, K)
        YT_EPhi = Y.T.dot(EPhi)

        opt_A_mean, opt_A_cov, cholSigInv = self.get_opt_A(tau, EPhiTPhi, YT_EPhi)
        LL = (-0.5*N*D * np.log(2 * np.pi) + 0.5*N*D * T.log(tau) - 0.5*tau*T.sum(Y**2)
              - 0.5*D * T.sum(2*T.log(T.diag(cholSigInv)))
              + 0.5*tau * T.sum(opt_A_mean.T * YT_EPhi))

        KL_w = 0.5 * (Sigma + mu**2 - T.log(Sigma) - 1).sum()

        ''' For prediction, m is assumed to be [m_1, ..., m_d] with m_i = opt_a_i, and and ls = opt_A_cov '''
        Y_pred_mean = EPhi.dot(m) + (X.dot(a) + b[None,:])
        EphiTphi = EPhi[:, :, None] * EPhi[:, None, :] # N x K*comp x K*comp
        comp = sf2s.shape[0]
        EphiTphi = EphiTphi - T.eye(K*comp)[None, :, :] * EphiTphi + T.eye(K*comp)[None, :, :] * E_cos_sq.flatten(2)[:, :, None]
        Psi = T.sum(T.sum(EphiTphi * ls[None, :, :], 2), 1) # N
        flat_diag_n = E_cos_sq.flatten(2) - EPhi**2 # N x K*comp
        Y_pred_var = tau**-1 * T.eye(D) + np.transpose(m.T.dot(flat_diag_n[:, :, None] * m),(1,0,2)) \
                     + T.eye(D)[None, :, :] * Psi[:, None, None]

        return LL, KL_w, Y_pred_mean, Y_pred_var, EPhi, EPhiTPhi, opt_A_mean, opt_A_cov[/code]

Everything is initialised by the following:

[codebox=python file=Unbenannt.txt]from vssgp_opt import VSSGP_opt
from scipy.optimize import minimize
import numpy as np
from numpy.random import randn, rand
np.set_printoptions(precision=2, suppress=True)
import pylab; pylab.ion() # turn interactive mode on

N, Q, D, K = 1000, 1, 1, 50
components, init_period, init_lengthscales, sf2s, tau = 2, 1e32, 1, np.array([1, 5]), 1

# Some synthetic data to play with
X = rand(N,Q) * 5*np.pi
X = np.sort(X, axis=0)
Z = rand(Q,K,components) * 5*np.pi
#a, b, c, d, e, f = randn(), randn(), randn(), randn(), randn(), randn()
#a, b, c, d, e, f = 0.6, 0.7, -0.6, 0.5, -0.1, -0.8
#a, b, c, d, e, f = -0.6, -0.3, -0.6, 0.6, 0.7, 0.6
#a, b, c, d, e, f = -0.5, -0.3, -0.6, 0.1, 1.1, 0.1
a, b, c, d, e, f = 0.6, -1.8, -0.5, -0.5, 1.7, 0
Y = a*np.sin(b*X+c) + d*np.sin(e*X+f)

# Initialise near the posterior:
mu = randn(Q,K,components)
# TODO: Currently tuned by hand to smallest value that doesn't diverge; we break symmetry to allow for some to get very small while others very large
feature_lengthscale = 5 # features are non-diminishing up to feature_lengthscale / lengthscale from z / lengthscale
lSigma = np.log(randn(Q,K,components)**2 / feature_lengthscale**2) # feature weights are np.exp(-0.5 * (x-z)**2 * Sigma / lengthscale**2)
lalpha = np.log(rand(K,components)*2*np.pi)
lalpha_delta = np.log(rand(K,components) * (2*np.pi - lalpha))
m = randn(components*K,D)
ls = np.zeros((components*K,D)) - 4
lhyp = np.log(1 + 1e-2*randn(2*Q+1, components)) # break symmetry
lhyp[0,:] += np.log(sf2s) # sf2
lhyp[1:Q+1,:] += np.log(init_lengthscales) # length-scales
lhyp[Q+1:,:] += np.log(init_period) # period
ltau = np.log(tau) # precision
lstsq = np.linalg.lstsq(np.hstack([X, np.ones((N,1))]), Y)[0]
a = 0*np.atleast_2d(lstsq[0]) # mean function slope
b = 0*lstsq[1] # mean function intercept

opt_params = {'Z': Z, 'm': m, 'ls': ls, 'mu': mu, 'lSigma': lSigma, 'lhyp': lhyp, 'ltau': ltau}
fixed_params = {'lalpha': lalpha, 'lalpha_delta': lalpha_delta, 'a': a, 'b': b}
inputs = {'X': X, 'Y': Y}
vssgp_opt = VSSGP_opt(N, Q, D, K, inputs, opt_params, fixed_params, use_exact_A=True, parallel = True, batch_size = 25, print_interval=1)

# LBFGS
x0 = np.concatenate([np.atleast_2d(opt_params[n]).flatten() for n in vssgp_opt.opt_param_names])
vssgp_opt.callback(x0)


res = minimize(vssgp_opt.func, x0, method='L-BFGS-B', jac=vssgp_opt.fprime,
               options={'ftol': 0, 'disp': False, 'maxiter': 500}, tol=0, callback=vssgp_opt.callback)

[/code]


The only thing I haven't included here are the Theano-compiled functions, which get compiled/loaded in vssgp_model.

Which module is it, concretely? I'm afraid I don't understand it.
BlackJack

The module that is executed as a program. For the umpteenth time. You only execute one of them as a program. That is the module that is executed as a program. And it has to be changed so that you can *import* it *without* the whole computation being run *in the process*. That may only happen when you execute it as a program. So: ``python modulname.py`` → great computation runs, but in Python ``import modulname`` → great computation does *not* run. And the two scenarios can be told apart by looking at `__name__` in the module.
[codebox=text file=Unbenannt.txt]$ cat modul.py
print __name__

if __name__ == '__main__':
    print 'Hallo'
$ python modul.py
__main__
Hallo
$ python
Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import modul
modul
>>>[/code]