
# Alien Neural Network

The great researcher Dr. Bari loves to do research (Obviously!!). He recently discovered a parallel universe. Now he wants to communicate with the aliens in the parallel universe.

Dr. Bari loves Deep Learning, and he suddenly came up with a novel neural network architecture that could be used to communicate with the aliens.

A neural network is a mathematical model inspired by the human brain. Neural networks are organized in layers. Usually, a neural network has one input layer, one or more hidden layers, and an output layer. Each layer has some nodes that take input from the nodes of the previous layer and provide output to the next layer. Inputs are given in the first layer and outputs are produced in the final layer. Each node in any of the layers produces its output from its input, using a function called an activation function.

In Figure 1, a typical neural network architecture with 2 hidden layers is shown. Here, all the nodes (the circles) of a layer are connected with all the nodes of the next layer. Such a layer is called a fully connected layer. Also, since the network has **no cycles**, it is called a feedforward neural network.

Dr. Bari will give his input text to the neural network, and the neural network will then translate his input text into the language of the aliens. Since Dr. Bari is using this feedforward neural network to communicate with aliens in another parallel universe, he deployed his model on a server located in the parallel universe. And thus, each edge in the neural network is replaced by the whole network, and his usual neural network becomes an “alien neural network”.

As each edge of the network is replaced by the whole network, Dr. Bari has decided to design his neural network with less complexity (see Figure 2). Thus, in the input layer, there will be only one node. Also, in the hidden layers, all the nodes of the previous layer **may not be connected** with all the nodes of the current layer. There can be some skip connections too, which means some nodes of a layer **(L = l)** may not have connections with the nodes of the immediate next layer **(L = l+1)**; rather, those nodes **(L = l)** can have direct connections with nodes of any of the later layers **(L > l+1)**.

In this problem, you don’t need to calculate the activation function. Since the contest is just five hours long, you cannot train a neural network model in such a short time! Dr. Bari has assigned you to help him in his research. He does not want to lose any information. So you need to calculate the maximum amount of information that can flow from the input node to the output node in the “alien neural net”.
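On the base graph alone, "the maximum amount of information that can flow from the input node to the output node" is the classic maximum-flow problem. As a minimal sketch (only a starting point; the actual solution must also deal with the extensions defined below), an Edmonds-Karp implementation might look like this:

```python
from collections import defaultdict, deque

def max_flow(n, edges, s, t):
    """Edmonds-Karp: repeatedly augment along shortest (BFS) paths.

    n     -- number of nodes, labeled 0..n-1
    edges -- list of directed edges (u, v, c) with capacity c
    s, t  -- source (input node) and sink (output node)
    """
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c          # merge parallel edges
        adj[u].add(v)
        adj[v].add(u)             # residual (reverse) direction
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow           # no augmenting path left
        # collect the path, find its bottleneck, push flow along it
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug
```

On the second sample case (where N = 0, so the graph is not extended at all), `max_flow(4, [(0, 1, 2), (1, 3, 3), (0, 2, 2), (2, 3, 3)], 0, 3)` returns `4`, matching `Case 2: 4`.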

Dr. Bari defined his Alien Neural Network formally as follows:

Let **G_{0} = (V_{0}, E_{0})** be a directed graph with set of vertices **V_{0}** and set of edges **E_{0}**. Each edge in the set **E_{0}** has a capacity of information flow associated with it. Two nodes **s, t ∈ V_{0}** are also defined as the **input** and **output node** respectively. So, **E_{0} ⊆ (V_{0} × V_{0} × ℕ)**. Now, the **first extension of G_{0}** is a graph **G_{1}** defined as follows:

For each edge **(u, v, c) ∈ E_{0}**: remove the edge and insert the whole graph **G_{0}**, considering **u as s and v as t**.

So, we can define the **i^{th} extension** of **G_{0}**, the graph **G_{i}**, as the **first extension of G_{i-1}**:

For each edge **(u, v, c) ∈ E_{i-1}**: remove the edge and insert the whole graph **G_{i-1}**, considering **u as s and v as t**.
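The transformation itself can be made concrete in code. The sketch below is a literal construction, so the result grows exponentially; it is only useful for inspecting G_{1} or G_{2} of tiny graphs, never for the actual solution with N up to 10^{16}. It replaces every edge by a fresh copy of the whole graph, gluing the copy's s onto u and its t onto v; the replaced edge's own capacity c disappears with the edge, as in the definition above.

```python
def extend(n, edges, s, t):
    """One extension step of the alien neural network.

    n     -- number of nodes, labeled 0..n-1
    edges -- list of directed edges (u, v, c) with capacity c
    s, t  -- the graph's designated input and output nodes
    Returns (new_n, new_edges) for the extended graph.
    """
    new_edges = []
    next_node = n                      # fresh ids for each copy's internal nodes
    for u, v, _c in edges:
        remap = {s: u, t: v}           # glue the copy's s/t onto the edge ends
        for node in range(n):
            if node not in remap:
                remap[node] = next_node
                next_node += 1
        for a, b, c in edges:          # insert the whole graph, relabeled
            new_edges.append((remap[a], remap[b], c))
    return next_node, new_edges
```

For the first sample's G_{0} (3 nodes, edges 0→1 and 1→2, with (s, t) = (0, 2)), one step yields |V_{0}| + |E_{0}|·(|V_{0}| − 2) = 5 nodes and |E_{0}|² = 4 edges.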

Now, as you can see, the number of nodes and edges increases with each transformation, but the nodes of the previous graph are always preserved. Thus, the input and output nodes defined for graph **G_{0}** are preserved in every following transformation **G_{i}, i > 0**.

Given the graph **G_{0}** as a set of edges, the input and output nodes, and an integer **N**, calculate the maximum amount of information flow of the graph **G_{N}**. To better understand this graph transformation, check the Notes section.
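Whatever closed form one derives for how the flow grows across extensions, with **N** as large as 10^{16} (and the answer requested modulo 10^{9} + 7, per the Output section), the extensions cannot be simulated one by one; any per-extension growth factor has to be raised to a huge power by repeated squaring. Python's three-argument `pow` already does this; a hand-rolled version is shown below purely to make the technique explicit:

```python
MOD = 10**9 + 7

def mod_pow(base, exp, mod=MOD):
    """Binary (repeated-squaring) exponentiation: O(log exp) multiplications."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                   # current bit set: multiply this power in
            result = result * base % mod
        base = base * base % mod      # square for the next bit
        exp >>= 1
    return result
```

How the base of the exponentiation arises from the extension recurrence is left to the solver; the sketch only covers the modular-arithmetic step.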

## Input

Input begins with the number of test cases, **T (0 < T < 101)**.

Each test case begins with **five** integers: the number of nodes in **G_{0}**, **|V_{0}| (2 ≤ |V_{0}| ≤ 1000)**; the number of edges in **G_{0}**, **|E_{0}| (1 ≤ |E_{0}| ≤ 10000)**; the number **N** for the **N^{th}** extension **G_{N} (0 ≤ N ≤ 10^{16})**; and finally two integers **s** and **t** for the input and output nodes **(0 ≤ s, t < |V_{0}|)**. Then **|E_{0}|** lines follow, each containing three integers **u, v, c (0 ≤ u, v < |V_{0}|** and **0 < c ≤ 10^{9})**.

## Output

For each case, print the case number and the maximum amount of information flow of the graph **G_{N}** as described in the problem, modulo **10^{9} + 7**.

## Samples

**Input**

```
3
3 2 100 0 2
0 1 10
1 2 20
4 4 0 0 3
0 1 2
1 3 3
0 2 2
2 3 3
3 3 50 0 2
0 2 3
0 1 4
1 2 4
```

**Output**

```
Case 1: 10
Case 2: 4
Case 3: 384884644
```

## Notes

Let G_{0} be:

Here, let’s consider (s, t) = (0, 1).

So, G_{1} is:

Another Example:

Let G_{0} be:

Considering (s, t) = (0,2)

Then, G_{1} will be:

Yet another example:

Let G_{0} be:

Considering (s, t) = (0, 2)

G_{1} will be:

#### jackal_1586
