
Code a neural net

This tutorial introduces TinyChain's Tensor data structure. The TinyChain Python client also has a built-in multi-layer neural network available in the `tinychain.ml.dnn` package.

These examples assume that you have a TinyChain host running locally on port 8702. If you're not sure how to do this, follow the instructions on the Getting Started page. If you run into any problems, please ask for help!

Single-layer Neural Net

One of TinyChain's most distinctive features is its seamless integration of machine learning and data science capabilities into a general-purpose cloud application framework. For example, try multiplying two matrices:

```python
import tinychain as tc

HOST = tc.host.Host("http://127.0.0.1:8702")
ENDPOINT = "/transact/hypothetical"


# this can't be a GET Op because it accepts non-scalar arguments
# (Tensor is a subclass of Collection, which is mutable and has no fixed size)
@tc.post_op
def matmul(a: tc.tensor.Tensor, b: tc.tensor.Tensor) -> tc.tensor.Tensor:
    return tc.tensor.einsum("ij,jk->ik", [a, b])


if __name__ == "__main__":
    cxt = tc.Context()
    cxt.matmul = matmul
    cxt.l = tc.tensor.Dense.load([2, 3], tc.I32, list(range(6)))
    cxt.r = tc.tensor.Dense.load([3, 4], tc.I32, list(range(12)))

    # note: a POST Op requires named arguments
    # these can be keyword arguments, but you can also pass a Map,
    # i.e. matmul(tc.Map(a=cxt.l, b=cxt.r))
    cxt.product = cxt.matmul(a=cxt.l, b=cxt.r)

    response = HOST.post(ENDPOINT, cxt)
    print(response)
```
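If you'd like to sanity-check the result, the same product is easy to compute locally with NumPy (assuming you have NumPy installed; `einsum("ij,jk->ik", ...)` is just ordinary matrix multiplication):

```python
import numpy as np

# the same operands as above: a 2x3 and a 3x4 matrix of consecutive integers
l = np.arange(6).reshape([2, 3])
r = np.arange(12).reshape([3, 4])

print(np.einsum("ij,jk->ik", l, r))  # [[20 23 26 29] [56 68 80 92]]
```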

Now, from this starting point, let's try implementing a neural network. Here's an example of evaluating a single layer of a neural network with NumPy:

```python
import numpy as np

# define the shape of the layer
input_size = 2
output_size = 1

# define the sigmoid activation function
# learn more at https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6
sigmoid = lambda x: 1 / (1 + np.exp(-x))

# define the weights and bias of the layer itself
weights = np.random.random([input_size, output_size])
bias = np.random.random([output_size])

# define evaluation of the layer with respect to a given input
evaluate = lambda input: sigmoid(np.matmul(input, weights) + bias)

if __name__ == "__main__":
    # check that our implementation works as expected
    print(evaluate(np.random.random([1, input_size])))
```

Now, let's implement this same functionality using TinyChain. Because TinyChain is designed to support blockchains, every TinyChain operation must be idempotent, which means that TinyChain has no built-in way to generate random numbers. So, we'll continue using NumPy as our source of randomness.
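One practical consequence: since the random initialization happens client-side, you can make your runs reproducible by seeding NumPy before generating the weights. This is a plain NumPy call, not a TinyChain feature:

```python
import numpy as np

# seed NumPy's generator so the randomly-initialized
# weights and bias are identical on every run
np.random.seed(0)
```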

```python
import numpy as np
import tinychain as tc

HOST = tc.host.Host("http://127.0.0.1:8702")
ENDPOINT = "/transact/hypothetical"

# define the shape of the layer
input_size = 2
output_size = 1

# initialize a new execution context
cxt = tc.Context()


# define the sigmoid activation function
# learn more at https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6
@tc.post_op
def sigmoid(x: tc.tensor.Tensor) -> tc.tensor.Tensor:
    return 1 / (1 + (-x).exp())


cxt.sigmoid = sigmoid

# define the weights and bias of the layer itself
weights = np.random.random([input_size, output_size])
cxt.weights = tc.tensor.Dense.load(weights.shape, tc.F32, weights.flatten().tolist())

bias = np.random.random([output_size])
cxt.bias = tc.tensor.Dense.load(bias.shape, tc.F32, bias.flatten().tolist())


# define evaluating the layer with respect to a given input
# the @tc.closure annotation tells TinyChain to capture the given states
# from the outer context
@tc.closure(cxt.bias, cxt.weights, cxt.sigmoid)
@tc.post_op
def evaluate(input: tc.tensor.Tensor) -> tc.tensor.Tensor:
    # compute input @ weights, matching the NumPy version above
    activation = tc.tensor.einsum("ij,jk->ik", [input, cxt.weights]) + cxt.bias
    return cxt.sigmoid(x=activation)  # remember: a POST op requires named arguments!


cxt.evaluate = evaluate

if __name__ == "__main__":
    # check that our implementation works as expected
    cxt.inputs = tc.tensor.Dense.load([1, input_size], tc.F32, [1, 2])
    cxt.result = cxt.evaluate(input=cxt.inputs)
    print(HOST.post(ENDPOINT, cxt))
```
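The `@tc.closure` decorator deserves a closer look: any state that an op references but doesn't define itself must be captured explicitly from the enclosing context. Here's a minimal sketch of the same pattern with a scalar constant; the names `offset` and `add_offset` are illustrative, not part of the library:

```python
import tinychain as tc

cxt = tc.Context()
cxt.offset = tc.Number(5)


# capture cxt.offset so that the op can reference it
@tc.closure(cxt.offset)
@tc.post_op
def add_offset(x: tc.Number) -> tc.Number:
    return x + cxt.offset


cxt.add_offset = add_offset
cxt.result = cxt.add_offset(x=2)  # resolves to 7
```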

This works, but it's not as clear or as useful as it could be. We need to separate the evaluation context from the input context, and ideally improve readability by grouping related data and methods into classes.

Since we're not making any methods here accessible over the network, we don't need `@tc.post_method` or type annotations. The (dis)advantage of this approach is that our `evaluate` method can only be called by other Python code using this library. You can think of this as loosely analogous to "protected" visibility in single-machine OOP. For an example of "public" visibility, where the class definition itself is accessible over the network and its methods are callable via HTTP, see the next section on Object orientation.
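For contrast, here's a purely hypothetical sketch of what the "public" variant might look like, using `@tc.post_method` and type annotations (the class name and method body are assumptions for illustration; see the next section for the real treatment):

```python
import tinychain as tc


# hypothetical sketch only: a layer whose evaluate method
# would be callable over the network via HTTP
class PublicLayer(tc.Tuple):
    @tc.post_method
    def evaluate(self, input: tc.tensor.Tensor) -> tc.tensor.Tensor:
        weights = tc.tensor.Dense(self[0])
        bias = tc.tensor.Dense(self[1])
        activation = tc.tensor.einsum("ij,jk->ik", [input, weights]) + bias
        return 1 / (1 + (-activation).exp())
```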

We'll start by defining a class called `Layer`:

```python
# since the data that comprises a layer is a fixed-length sequence, we'll subclass tc.Tuple,
# but in general the most flexible native TinyChain class to subclass is tc.Map
class Layer(tc.Tuple):
    # we use Python's built-in @property to provide getter methods for the weights and bias
    @property
    def weights(self):
        return tc.tensor.Dense(self[0])

    @property
    def bias(self):
        return tc.tensor.Dense(self[1])

    def evaluate(self, input):
        sigmoid = lambda x: 1 / (1 + (-x).exp())
        # compute input @ weights, as in the NumPy version above
        activation = tc.tensor.einsum("ij,jk->ik", [input, self.weights]) + self.bias
        return sigmoid(activation)
```

Notice that there is no `__init__` method in this class! In general you should not use `__init__` in a TinyChain class. This is because a TinyChain state can be constructed from a concrete value or from a reference to a value somewhere else. For example, `tc.Number(1)` is a valid way of initializing a `Number`, but so is `tc.Number(tc.URI("$x"))` and so is `tc.Number(tc.URI("http://example.com/myapp/numeric_constant"))`.
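Collected into one snippet, all three construction forms look like this (the URL is the example's placeholder, not a live endpoint):

```python
import tinychain as tc

n1 = tc.Number(1)             # from a concrete value
n2 = tc.Number(tc.URI("$x"))  # from a reference within the current context
n3 = tc.Number(tc.URI("http://example.com/myapp/numeric_constant"))  # from a network address
```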

Now, code in a different module or a different library altogether can easily initialize and call a `Layer`:

```python
import numpy as np
import tinychain as tc

HOST = tc.host.Host("http://127.0.0.1:8702")
ENDPOINT = "/transact/hypothetical"

# ...

if __name__ == "__main__":
    # define the shape of the layer
    input_size = 2
    output_size = 1

    cxt = tc.Context()

    weights = np.random.random([input_size, output_size])
    cxt.weights = tc.tensor.Dense.load(weights.shape, tc.F32, weights.flatten().tolist())

    bias = np.random.random([output_size])
    cxt.bias = tc.tensor.Dense.load(bias.shape, tc.F32, bias.flatten().tolist())

    # initialize the Layer
    cxt.layer = Layer([cxt.weights, cxt.bias])

    # check that our implementation works as expected
    cxt.inputs = tc.tensor.Dense.load([1, input_size], tc.F32, [1, 2])
    cxt.result = cxt.layer.evaluate(input=cxt.inputs)
    print(HOST.post(ENDPOINT, cxt))
```
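From here, stacking layers is mostly a matter of composition. Below is a minimal sketch of a fixed two-layer network built from the `Layer` class above; this is an illustration under the same conventions, not the implementation in `tinychain.ml.dnn`:

```python
# hypothetical sketch: a two-layer network composed of the Layer class above
class TwoLayerNet(tc.Tuple):
    @property
    def layer1(self):
        return Layer(self[0])

    @property
    def layer2(self):
        return Layer(self[1])

    def evaluate(self, input):
        # feed the first layer's output into the second layer
        return self.layer2.evaluate(self.layer1.evaluate(input))
```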
