Generating Virtual Immunofluorescence (IF) Images

Clone the pix2pix and CycleGAN repository and install the Python package requirements.

Code
!git clone https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
%cd pytorch-CycleGAN-and-pix2pix
!pip install -r requirements.txt
Cloning into 'pytorch-CycleGAN-and-pix2pix'...
remote: Enumerating objects: 2516, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 2516 (delta 0), reused 2 (delta 0), pack-reused 2513
Receiving objects: 100% (2516/2516), 8.20 MiB | 21.70 MiB/s, done.
Resolving deltas: 100% (1575/1575), done.
/content/pytorch-CycleGAN-and-pix2pix
Requirement already satisfied: torch>=1.4.0 in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 1)) (2.2.1+cu121)
Requirement already satisfied: torchvision>=0.5.0 in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 2)) (0.17.1+cu121)
Collecting dominate>=2.4.0 (from -r requirements.txt (line 3))
  Downloading dominate-2.9.1-py2.py3-none-any.whl (29 kB)
Collecting visdom>=0.1.8.8 (from -r requirements.txt (line 4))
  Downloading visdom-0.2.4.tar.gz (1.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.4/1.4 MB 6.0 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Collecting wandb (from -r requirements.txt (line 5))
  Downloading wandb-0.16.6-py3-none-any.whl (2.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.2/2.2 MB 7.7 MB/s eta 0:00:00
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch>=1.4.0->-r requirements.txt (line 1)) (3.13.4)
Requirement already satisfied: typing-extensions>=4.8.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.4.0->-r requirements.txt (line 1)) (4.11.0)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.4.0->-r requirements.txt (line 1)) (1.12)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.4.0->-r requirements.txt (line 1)) (3.3)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.4.0->-r requirements.txt (line 1)) (3.1.3)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch>=1.4.0->-r requirements.txt (line 1)) (2023.6.0)
Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch>=1.4.0->-r requirements.txt (line 1))
  Using cached nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)
Collecting nvidia-cuda-runtime-cu12==12.1.105 (from torch>=1.4.0->-r requirements.txt (line 1))
  Using cached nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)
Collecting nvidia-cuda-cupti-cu12==12.1.105 (from torch>=1.4.0->-r requirements.txt (line 1))
  Using cached nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)
Collecting nvidia-cudnn-cu12==8.9.2.26 (from torch>=1.4.0->-r requirements.txt (line 1))
  Using cached nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 MB)
Collecting nvidia-cublas-cu12==12.1.3.1 (from torch>=1.4.0->-r requirements.txt (line 1))
  Using cached nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)
Collecting nvidia-cufft-cu12==11.0.2.54 (from torch>=1.4.0->-r requirements.txt (line 1))
  Using cached nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)
Collecting nvidia-curand-cu12==10.3.2.106 (from torch>=1.4.0->-r requirements.txt (line 1))
  Using cached nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)
Collecting nvidia-cusolver-cu12==11.4.5.107 (from torch>=1.4.0->-r requirements.txt (line 1))
  Using cached nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)
Collecting nvidia-cusparse-cu12==12.1.0.106 (from torch>=1.4.0->-r requirements.txt (line 1))
  Using cached nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)
Collecting nvidia-nccl-cu12==2.19.3 (from torch>=1.4.0->-r requirements.txt (line 1))
  Using cached nvidia_nccl_cu12-2.19.3-py3-none-manylinux1_x86_64.whl (166.0 MB)
Collecting nvidia-nvtx-cu12==12.1.105 (from torch>=1.4.0->-r requirements.txt (line 1))
  Using cached nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)
Requirement already satisfied: triton==2.2.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.4.0->-r requirements.txt (line 1)) (2.2.0)
Collecting nvidia-nvjitlink-cu12 (from nvidia-cusolver-cu12==11.4.5.107->torch>=1.4.0->-r requirements.txt (line 1))
  Using cached nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (21.1 MB)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from torchvision>=0.5.0->-r requirements.txt (line 2)) (1.25.2)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/local/lib/python3.10/dist-packages (from torchvision>=0.5.0->-r requirements.txt (line 2)) (9.4.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from visdom>=0.1.8.8->-r requirements.txt (line 4)) (1.11.4)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from visdom>=0.1.8.8->-r requirements.txt (line 4)) (2.31.0)
Requirement already satisfied: tornado in /usr/local/lib/python3.10/dist-packages (from visdom>=0.1.8.8->-r requirements.txt (line 4)) (6.3.3)
Requirement already satisfied: six in /usr/local/lib/python3.10/dist-packages (from visdom>=0.1.8.8->-r requirements.txt (line 4)) (1.16.0)
Collecting jsonpatch (from visdom>=0.1.8.8->-r requirements.txt (line 4))
  Downloading jsonpatch-1.33-py2.py3-none-any.whl (12 kB)
Requirement already satisfied: websocket-client in /usr/local/lib/python3.10/dist-packages (from visdom>=0.1.8.8->-r requirements.txt (line 4)) (1.7.0)
Requirement already satisfied: Click!=8.0.0,>=7.1 in /usr/local/lib/python3.10/dist-packages (from wandb->-r requirements.txt (line 5)) (8.1.7)
Collecting GitPython!=3.1.29,>=1.0.0 (from wandb->-r requirements.txt (line 5))
  Downloading GitPython-3.1.43-py3-none-any.whl (207 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 207.3/207.3 kB 9.5 MB/s eta 0:00:00
Requirement already satisfied: psutil>=5.0.0 in /usr/local/lib/python3.10/dist-packages (from wandb->-r requirements.txt (line 5)) (5.9.5)
Collecting sentry-sdk>=1.0.0 (from wandb->-r requirements.txt (line 5))
  Downloading sentry_sdk-1.45.0-py2.py3-none-any.whl (267 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 267.1/267.1 kB 9.7 MB/s eta 0:00:00
Collecting docker-pycreds>=0.4.0 (from wandb->-r requirements.txt (line 5))
  Downloading docker_pycreds-0.4.0-py2.py3-none-any.whl (9.0 kB)
Requirement already satisfied: PyYAML in /usr/local/lib/python3.10/dist-packages (from wandb->-r requirements.txt (line 5)) (6.0.1)
Collecting setproctitle (from wandb->-r requirements.txt (line 5))
  Downloading setproctitle-1.3.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (30 kB)
Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from wandb->-r requirements.txt (line 5)) (67.7.2)
Requirement already satisfied: appdirs>=1.4.3 in /usr/local/lib/python3.10/dist-packages (from wandb->-r requirements.txt (line 5)) (1.4.4)
Requirement already satisfied: protobuf!=4.21.0,<5,>=3.19.0 in /usr/local/lib/python3.10/dist-packages (from wandb->-r requirements.txt (line 5)) (3.20.3)
Collecting gitdb<5,>=4.0.1 (from GitPython!=3.1.29,>=1.0.0->wandb->-r requirements.txt (line 5))
  Downloading gitdb-4.0.11-py3-none-any.whl (62 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.7/62.7 kB 6.5 MB/s eta 0:00:00
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->visdom>=0.1.8.8->-r requirements.txt (line 4)) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->visdom>=0.1.8.8->-r requirements.txt (line 4)) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->visdom>=0.1.8.8->-r requirements.txt (line 4)) (2.0.7)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->visdom>=0.1.8.8->-r requirements.txt (line 4)) (2024.2.2)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.4.0->-r requirements.txt (line 1)) (2.1.5)
Collecting jsonpointer>=1.9 (from jsonpatch->visdom>=0.1.8.8->-r requirements.txt (line 4))
  Downloading jsonpointer-2.4-py2.py3-none-any.whl (7.8 kB)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.4.0->-r requirements.txt (line 1)) (1.3.0)
Collecting smmap<6,>=3.0.1 (from gitdb<5,>=4.0.1->GitPython!=3.1.29,>=1.0.0->wandb->-r requirements.txt (line 5))
  Downloading smmap-5.0.1-py3-none-any.whl (24 kB)
Building wheels for collected packages: visdom
  Building wheel for visdom (setup.py) ... done
  Created wheel for visdom: filename=visdom-0.2.4-py3-none-any.whl size=1408195 sha256=7f337ffa8b63b556a37fc615c3085d5c7838c216d7d5f442c46c4d6867c5a529
  Stored in directory: /root/.cache/pip/wheels/42/29/49/5bed207bac4578e4d2c0c5fc0226bfd33a7e2953ea56356855
Successfully built visdom
Installing collected packages: smmap, setproctitle, sentry-sdk, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, jsonpointer, dominate, docker-pycreds, nvidia-cusparse-cu12, nvidia-cudnn-cu12, jsonpatch, gitdb, visdom, nvidia-cusolver-cu12, GitPython, wandb
Successfully installed GitPython-3.1.43 docker-pycreds-0.4.0 dominate-2.9.1 gitdb-4.0.11 jsonpatch-1.33 jsonpointer-2.4 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.19.3 nvidia-nvjitlink-cu12-12.4.127 nvidia-nvtx-cu12-12.1.105 sentry-sdk-1.45.0 setproctitle-1.3.3 smmap-5.0.1 visdom-0.2.4 wandb-0.16.6

Download Dataset

Download the paired histology patches and corresponding masks here: https://www.dropbox.com/s/tjx7sbx7f5vaqom/new_seq_data_subset.zip?dl=0. The files should be uploaded and unzipped into ./datasets/new_seq_data, which should contain a ‘test’ and a ‘train’ folder. Each file contains the input and ground truth images stitched together.

You can verify that the code blocks below worked by inspecting the following directory using the file navigator on the left: /pytorch-CycleGAN-and-pix2pix/datasets/.

Code
!curl -L -o "/content/pytorch-CycleGAN-and-pix2pix/datasets/new_seq_data.zip" https://www.dropbox.com/s/tjx7sbx7f5vaqom/new_seq_data_subset.zip?dl=0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    69    0    69    0     0    274      0 --:--:-- --:--:-- --:--:--   274
100   318  100   318    0     0    494      0 --:--:-- --:--:-- --:--:--     0
100   491    0   491    0     0    378      0 --:--:--  0:00:01 --:--:--   378
100 91.6M  100 91.6M    0     0  33.4M      0  0:00:02  0:00:02 --:--:-- 99.1M
Code
from zipfile import ZipFile

with ZipFile('/content/pytorch-CycleGAN-and-pix2pix/datasets/new_seq_data.zip', 'r') as zipObj:
  zipObj.extractall('/content/pytorch-CycleGAN-and-pix2pix/datasets/')
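
As an alternative to the file navigator, here is a minimal sketch (using the same paths as above) that counts the extracted patches in each split.

Code
import os

data_root = '/content/pytorch-CycleGAN-and-pix2pix/datasets/new_seq_data'
for split in ('train', 'test'):
  split_dir = os.path.join(data_root, split)
  n_images = len([f for f in os.listdir(split_dir) if f.endswith('.png')])
  print(f'{split}: {n_images} images')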

Training

Use the command below to train your model. The key arguments are:

`--dataroot` -> path to the folder containing ‘test’ and ‘train’

`--n_epochs` -> number of epochs trained at a constant learning rate

`--n_epochs_decay` -> number of additional epochs over which the learning rate decays linearly to zero

`--checkpoints_dir` -> path for storing trained model checkpoints

Train the model for 5 epochs at a constant learning rate of 0.002, then decay the learning rate to zero over 1 additional epoch (6 epochs in total).
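
To see how these two arguments interact, here is a minimal sketch of the linear learning-rate schedule; it is not the repository's code, but it reproduces the per-epoch learning rates implied by the `learning rate ...` lines in the training log below.

Code
# Sketch of the 'linear' lr policy: constant for the first epochs, then a
# linear ramp to zero over the decay epochs (epoch numbering starts at 1).
def linear_lr(epoch, lr=0.002, n_epochs=5, n_epochs_decay=1, epoch_count=1):
  scale = 1.0 - max(0, epoch + epoch_count - n_epochs) / float(n_epochs_decay + 1)
  return lr * scale

for epoch in range(1, 5 + 1 + 1):
  print(epoch, linear_lr(epoch))
# epochs 1-4 -> 0.002, epoch 5 -> 0.001, epoch 6 -> 0.0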

Code
!python train.py --dataroot ./datasets/new_seq_data/ --name nuclei --model pix2pix --direction AtoB --n_epochs 5 --n_epochs_decay 1 --save_epoch_freq 1 --checkpoints_dir main --lr 0.002
----------------- Options ---------------
               batch_size: 1                             
                    beta1: 0.5                           
          checkpoints_dir: main                             [default: ./checkpoints]
           continue_train: False                         
                crop_size: 256                           
                 dataroot: ./datasets/new_seq_data/         [default: None]
             dataset_mode: aligned                       
                direction: AtoB                          
              display_env: main                          
             display_freq: 400                           
               display_id: 1                             
            display_ncols: 4                             
             display_port: 8097                          
           display_server: http://localhost              
          display_winsize: 256                           
                    epoch: latest                        
              epoch_count: 1                             
                 gan_mode: vanilla                       
                  gpu_ids: 0                             
                init_gain: 0.02                          
                init_type: normal                        
                 input_nc: 3                             
                  isTrain: True                             [default: None]
                lambda_L1: 100.0                         
                load_iter: 0                                [default: 0]
                load_size: 286                           
                       lr: 0.002                            [default: 0.0002]
           lr_decay_iters: 50                            
                lr_policy: linear                        
         max_dataset_size: inf                           
                    model: pix2pix                          [default: cycle_gan]
                 n_epochs: 5                                [default: 100]
           n_epochs_decay: 1                                [default: 100]
               n_layers_D: 3                             
                     name: nuclei                           [default: experiment_name]
                      ndf: 64                            
                     netD: basic                         
                     netG: unet_256                      
                      ngf: 64                            
               no_dropout: False                         
                  no_flip: False                         
                  no_html: False                         
                     norm: batch                         
              num_threads: 4                             
                output_nc: 3                             
                    phase: train                         
                pool_size: 0                             
               preprocess: resize_and_crop               
               print_freq: 100                           
             save_by_iter: False                         
          save_epoch_freq: 1                                [default: 5]
         save_latest_freq: 5000                          
           serial_batches: False                         
                   suffix:                               
         update_html_freq: 1000                          
                use_wandb: False                         
                  verbose: False                         
       wandb_project_name: CycleGAN-and-pix2pix          
----------------- End -------------------
dataset [AlignedDataset] was created
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:558: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
The number of training images = 427
initialize network with normal
initialize network with normal
model [Pix2PixModel] was created
---------- Networks initialized -------------
[Network G] Total number of parameters : 54.414 M
[Network D] Total number of parameters : 2.769 M
-----------------------------------------------
Setting up a new session...
Exception in user code:
------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn
    sock = connection.create_connection(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen
    response = self._make_request(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request
    conn.request(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request
    self.endheaders()
  File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output
    self.send(msg)
  File "/usr/lib/python3.10/http/client.py", line 976, in send
    self.connect()
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect
    self.sock = self._new_conn()
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7d056fe632b0>: Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7d056fe632b0>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/visdom/__init__.py", line 756, in _send
    return self._handle_post(
  File "/usr/local/lib/python3.10/dist-packages/visdom/__init__.py", line 720, in _handle_post
    r = self.session.post(url, data=data)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 637, in post
    return self.request("POST", url, data=data, json=json, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7d056fe632b0>: Failed to establish a new connection: [Errno 111] Connection refused'))
[Errno 99] Cannot assign requested address
Visdom.setup_socket.<locals>.on_close() takes 1 positional argument but 3 were given
[Errno 99] Cannot assign requested address
Visdom.setup_socket.<locals>.on_close() takes 1 positional argument but 3 were given
[Errno 99] Cannot assign requested address
Visdom.setup_socket.<locals>.on_close() takes 1 positional argument but 3 were given
Visdom python client failed to establish socket to get messages from the server. This feature is optional and can be disabled by initializing Visdom with `use_incoming_socket=False`, which will prevent waiting for this request to timeout.


Could not connect to Visdom server. 
 Trying to start a server....
Command: /usr/bin/python3 -m visdom.server -p 8097 &>/dev/null &
create web directory main/nuclei/web...
/usr/local/lib/python3.10/dist-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
learning rate 0.0020000 -> 0.0020000
/usr/lib/python3.10/multiprocessing/popen_fork.py:66: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.
  self.pid = os.fork()
[Errno 99] Cannot assign requested address
Visdom.setup_socket.<locals>.on_close() takes 1 positional argument but 3 were given
[Errno 99] Cannot assign requested address
Visdom.setup_socket.<locals>.on_close() takes 1 positional argument but 3 were given
(epoch: 1, iters: 100, time: 0.087, data: 0.249) G_GAN: 0.738 G_L1: 3.482 D_real: 0.690 D_fake: 0.662 
(epoch: 1, iters: 200, time: 0.068, data: 0.002) G_GAN: 0.699 G_L1: 5.516 D_real: 0.574 D_fake: 0.695 
(epoch: 1, iters: 300, time: 0.089, data: 0.002) G_GAN: 1.102 G_L1: 6.930 D_real: 0.115 D_fake: 0.785 
(epoch: 1, iters: 400, time: 0.305, data: 0.002) G_GAN: 1.636 G_L1: 9.888 D_real: 0.270 D_fake: 0.227 
saving the model at the end of epoch 1, iters 427
End of epoch 1 / 6   Time Taken: 35 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 2, iters: 73, time: 0.089, data: 0.010) G_GAN: 0.861 G_L1: 4.057 D_real: 0.531 D_fake: 0.354 
(epoch: 2, iters: 173, time: 0.088, data: 0.003) G_GAN: 0.942 G_L1: 5.568 D_real: 0.434 D_fake: 0.623 
(epoch: 2, iters: 273, time: 0.083, data: 0.002) G_GAN: 1.531 G_L1: 8.191 D_real: 1.004 D_fake: 0.359 
(epoch: 2, iters: 373, time: 0.239, data: 0.002) G_GAN: 1.135 G_L1: 3.889 D_real: 1.267 D_fake: 0.337 
saving the model at the end of epoch 2, iters 854
End of epoch 2 / 6   Time Taken: 27 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 3, iters: 46, time: 0.076, data: 0.002) G_GAN: 1.094 G_L1: 10.222 D_real: 0.465 D_fake: 0.448 
(epoch: 3, iters: 146, time: 0.091, data: 0.014) G_GAN: 1.178 G_L1: 9.276 D_real: 0.228 D_fake: 0.305 
(epoch: 3, iters: 246, time: 0.093, data: 0.002) G_GAN: 0.916 G_L1: 4.148 D_real: 1.107 D_fake: 0.421 
(epoch: 3, iters: 346, time: 0.250, data: 0.009) G_GAN: 0.846 G_L1: 10.845 D_real: 0.009 D_fake: 1.021 
saving the model at the end of epoch 3, iters 1281
End of epoch 3 / 6   Time Taken: 26 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 4, iters: 19, time: 0.085, data: 0.002) G_GAN: 1.100 G_L1: 5.776 D_real: 0.300 D_fake: 0.450 
(epoch: 4, iters: 119, time: 0.095, data: 0.008) G_GAN: 1.055 G_L1: 7.001 D_real: 0.082 D_fake: 0.616 
(epoch: 4, iters: 219, time: 0.095, data: 0.002) G_GAN: 0.775 G_L1: 4.417 D_real: 0.619 D_fake: 0.570 
(epoch: 4, iters: 319, time: 0.291, data: 0.002) G_GAN: 0.980 G_L1: 8.942 D_real: 0.211 D_fake: 0.517 
(epoch: 4, iters: 419, time: 0.089, data: 0.002) G_GAN: 1.261 G_L1: 6.908 D_real: 0.114 D_fake: 0.388 
saving the model at the end of epoch 4, iters 1708
End of epoch 4 / 6   Time Taken: 26 sec
learning rate 0.0020000 -> 0.0010000
(epoch: 5, iters: 92, time: 0.097, data: 0.002) G_GAN: 1.132 G_L1: 6.358 D_real: 0.251 D_fake: 0.503 
(epoch: 5, iters: 192, time: 0.097, data: 0.002) G_GAN: 0.637 G_L1: 2.030 D_real: 0.758 D_fake: 1.284 
(epoch: 5, iters: 292, time: 0.273, data: 0.003) G_GAN: 1.342 G_L1: 8.566 D_real: 0.125 D_fake: 0.429 
(epoch: 5, iters: 392, time: 0.098, data: 0.002) G_GAN: 0.817 G_L1: 4.181 D_real: 0.657 D_fake: 0.555 
saving the model at the end of epoch 5, iters 2135
End of epoch 5 / 6   Time Taken: 30 sec
learning rate 0.0010000 -> 0.0000000
(epoch: 6, iters: 65, time: 0.095, data: 0.002) G_GAN: 1.054 G_L1: 2.947 D_real: 0.902 D_fake: 0.442 
(epoch: 6, iters: 165, time: 0.095, data: 0.002) G_GAN: 1.072 G_L1: 3.825 D_real: 0.798 D_fake: 0.440 
(epoch: 6, iters: 265, time: 0.267, data: 0.002) G_GAN: 1.177 G_L1: 11.639 D_real: 0.009 D_fake: 0.392 
(epoch: 6, iters: 365, time: 0.093, data: 0.002) G_GAN: 1.092 G_L1: 10.413 D_real: 0.009 D_fake: 0.430 
saving the model at the end of epoch 6, iters 2562
End of epoch 6 / 6   Time Taken: 26 sec

Testing

Use the following command to test your trained model.

Code
!python test.py --dataroot ./datasets/new_seq_data/ --direction AtoB --model pix2pix --name nuclei --checkpoints_dir main
----------------- Options ---------------
             aspect_ratio: 1.0                           
               batch_size: 1                             
          checkpoints_dir: main                             [default: ./checkpoints]
                crop_size: 256                           
                 dataroot: ./datasets/new_seq_data/         [default: None]
             dataset_mode: aligned                       
                direction: AtoB                          
          display_winsize: 256                           
                    epoch: latest                        
                     eval: False                         
                  gpu_ids: 0                             
                init_gain: 0.02                          
                init_type: normal                        
                 input_nc: 3                             
                  isTrain: False                            [default: None]
                load_iter: 0                                [default: 0]
                load_size: 256                           
         max_dataset_size: inf                           
                    model: pix2pix                          [default: test]
               n_layers_D: 3                             
                     name: nuclei                           [default: experiment_name]
                      ndf: 64                            
                     netD: basic                         
                     netG: unet_256                      
                      ngf: 64                            
               no_dropout: False                         
                  no_flip: False                         
                     norm: batch                         
                 num_test: 50                            
              num_threads: 4                             
                output_nc: 3                             
                    phase: test                          
               preprocess: resize_and_crop               
              results_dir: ./results/                    
           serial_batches: False                         
                   suffix:                               
                use_wandb: False                         
                  verbose: False                         
       wandb_project_name: CycleGAN-and-pix2pix          
----------------- End -------------------
dataset [AlignedDataset] was created
initialize network with normal
model [Pix2PixModel] was created
loading the model from main/nuclei/latest_net_G.pth
---------- Networks initialized -------------
[Network G] Total number of parameters : 54.414 M
-----------------------------------------------
creating web directory ./results/nuclei/test_latest
processing (0000)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_3_10.png']
processing (0005)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_3_15.png']
processing (0010)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_3_5.png']
processing (0015)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_4_10.png']
processing (0020)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_4_16.png']
processing (0025)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_4_6.png']
processing (0030)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_5_11.png']
processing (0035)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_5_16.png']
processing (0040)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_5_5.png']
processing (0045)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_6_10.png']

Visualize

Visualize the prediction (fake_B), the histology input (real_A), and the ground truth (real_B) using the following commands.

Code
import matplotlib.pyplot as plt
import numpy as np

img = plt.imread('/content/pytorch-CycleGAN-and-pix2pix/results/nuclei/test_latest/images/A1_registered_HEnorm_crop_1_3_11_fake_B.png')
plt.imshow(img)

Code
img = plt.imread('/content/pytorch-CycleGAN-and-pix2pix/results/nuclei/test_latest/images/A1_registered_HEnorm_crop_1_3_11_real_A.png')
plt.imshow(img)

Code
img = plt.imread('/content/pytorch-CycleGAN-and-pix2pix/results/nuclei/test_latest/images/A1_registered_HEnorm_crop_1_3_11_real_B.png')
plt.imshow(img)
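
The three panels can also be viewed side by side; a minimal sketch using the same result paths as above:

Code
base = '/content/pytorch-CycleGAN-and-pix2pix/results/nuclei/test_latest/images/A1_registered_HEnorm_crop_1_3_11'
panels = [('_real_A.png', 'histology (real_A)'),
          ('_fake_B.png', 'prediction (fake_B)'),
          ('_real_B.png', 'ground truth (real_B)')]
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (suffix, title) in zip(axes, panels):
  ax.imshow(plt.imread(base + suffix))
  ax.set_title(title)
  ax.axis('off')
plt.show()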

Code
# A helper class to iterate over the test results, yielding (prediction, ground truth) pairs.
from glob import glob

import matplotlib.pyplot as plt


class DataIterator():
  def __init__(self):
    self.path = '/content/pytorch-CycleGAN-and-pix2pix/results/nuclei/test_latest/images/'
    self.pred_suffix = "fake_B.png"
    self.truth_suffix = "real_B.png"
    files = glob(f"{self.path}*")
    self.truth_files = sorted(x for x in files if x.endswith(self.truth_suffix))
    self.pred_files = sorted(x for x in files if x.endswith(self.pred_suffix))
    self.i = 0

  def __iter__(self):
    return self

  def __next__(self):
    if self.i < len(self.truth_files):
      real = plt.imread(self.truth_files[self.i])
      pred = plt.imread(self.pred_files[self.i])
      self.i += 1
      return pred, real
    raise StopIteration()
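
For reference, a short usage sketch of the iterator: it yields one (prediction, ground truth) pair per test patch.

Code
pred, real = next(iter(DataIterator()))
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.imshow(pred); ax1.set_title('prediction (fake_B)'); ax1.axis('off')
ax2.imshow(real); ax2.set_title('ground truth (real_B)'); ax2.axis('off')
plt.show()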

Question 1 Answer

Code
from skimage.metrics import structural_similarity as compare_ssim
from scipy.ndimage import gaussian_filter

def metric(pred: np.ndarray, ground_truth: np.ndarray) -> float:
    """
    Write a function here that implements the similarity score used in the SHIFT
    Paper
    """
    # Convert images to grayscale
    pred_gray = np.mean(pred, axis=-1)
    ground_truth_gray = np.mean(ground_truth, axis=-1)

    # Compute SSIM for each overlapping window of size 11
    ssim_scores = []
    for i in range(len(pred_gray) - 10):
        for j in range(len(pred_gray[0]) - 10):
            window_pred = pred_gray[i:i+11, j:j+11]
            window_gt = ground_truth_gray[i:i+11, j:j+11]
            window_pred_filter = gaussian_filter(window_pred, sigma=3)
            window_gt_filter = gaussian_filter(window_gt, sigma=3)
            ssim_score = compare_ssim(window_pred_filter, window_gt_filter)
            ssim_scores.append(ssim_score)

    # Take the average of SSIM scores
    average_ssim = np.mean(ssim_scores)
    return average_ssim

pred = plt.imread('/content/pytorch-CycleGAN-and-pix2pix/results/nuclei/test_latest/images/A1_registered_HEnorm_crop_1_3_11_fake_B.png')
ground_truth = plt.imread('/content/pytorch-CycleGAN-and-pix2pix/results/nuclei/test_latest/images/A1_registered_HEnorm_crop_1_3_11_real_B.png')

metric(pred, ground_truth)
0.6975167690362379

(b) Comment on the advantages and disadvantages of using this metric.

SSIM offers advantages such as sensitivity to perceptual changes (it compares local luminance, contrast, and structure rather than raw pixel differences) and an interpretable, bounded score. Its limitations include computational cost (especially with a sliding-window average like the one above), sensitivity to parameter choices such as the window size and Gaussian sigma, and reduced reliability for certain distortions: for example, it penalizes small spatial misalignments between the predicted and real stains even when the staining pattern is essentially correct.
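
For comparison, a minimal sketch that computes skimage's built-in mean SSIM over the whole grayscale image, reusing the pred and ground_truth arrays loaded above; it avoids the explicit window loop and runs much faster.

Code
pred_gray = np.mean(pred, axis=-1)
ground_truth_gray = np.mean(ground_truth, axis=-1)
# data_range=1.0 because plt.imread returns PNG data as floats in [0, 1]
print(compare_ssim(pred_gray, ground_truth_gray, data_range=1.0))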

(c) Use the metric to calculate the average performance of the model across all test images.

Code
def calculate_error_score():
  """
  Write a function here that uses DataIterator from above to
  1. loop through all test images
  2. calculate the metric for every test image
  3. return the average metric across all test images

  """
  total_metric = 0.0
  num_images = 0

  for pred, ground_truth in DataIterator():
    if pred is not None and ground_truth is not None:
      metric_value = metric(pred, ground_truth)
      total_metric += metric_value
      num_images += 1

  if num_images == 0:
    print("No test images found.")
    return 0.0

  average_metric = total_metric / num_images
  return average_metric
Code
calculate_error_score()
0.8251722564708581

Question 2 Answer

Retrain the model with the same settings but with --n_epochs 20 (again followed by one decay epoch), then re-run testing and recompute the average metric.

Code
!python train.py --dataroot ./datasets/new_seq_data/ --name nuclei --model pix2pix --direction AtoB --n_epochs 20 --n_epochs_decay 1 --save_epoch_freq 1 --checkpoints_dir main --lr 0.002
----------------- Options ---------------
               batch_size: 1                             
                    beta1: 0.5                           
          checkpoints_dir: main                             [default: ./checkpoints]
           continue_train: False                         
                crop_size: 256                           
                 dataroot: ./datasets/new_seq_data/         [default: None]
             dataset_mode: aligned                       
                direction: AtoB                          
              display_env: main                          
             display_freq: 400                           
               display_id: 1                             
            display_ncols: 4                             
             display_port: 8097                          
           display_server: http://localhost              
          display_winsize: 256                           
                    epoch: latest                        
              epoch_count: 1                             
                 gan_mode: vanilla                       
                  gpu_ids: 0                             
                init_gain: 0.02                          
                init_type: normal                        
                 input_nc: 3                             
                  isTrain: True                             [default: None]
                lambda_L1: 100.0                         
                load_iter: 0                                [default: 0]
                load_size: 286                           
                       lr: 0.002                            [default: 0.0002]
           lr_decay_iters: 50                            
                lr_policy: linear                        
         max_dataset_size: inf                           
                    model: pix2pix                          [default: cycle_gan]
                 n_epochs: 20                               [default: 100]
           n_epochs_decay: 1                                [default: 100]
               n_layers_D: 3                             
                     name: nuclei                           [default: experiment_name]
                      ndf: 64                            
                     netD: basic                         
                     netG: unet_256                      
                      ngf: 64                            
               no_dropout: False                         
                  no_flip: False                         
                  no_html: False                         
                     norm: batch                         
              num_threads: 4                             
                output_nc: 3                             
                    phase: train                         
                pool_size: 0                             
               preprocess: resize_and_crop               
               print_freq: 100                           
             save_by_iter: False                         
          save_epoch_freq: 1                                [default: 5]
         save_latest_freq: 5000                          
           serial_batches: False                         
                   suffix:                               
         update_html_freq: 1000                          
                use_wandb: False                         
                  verbose: False                         
       wandb_project_name: CycleGAN-and-pix2pix          
----------------- End -------------------
dataset [AlignedDataset] was created
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:558: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
The number of training images = 427
initialize network with normal
initialize network with normal
model [Pix2PixModel] was created
---------- Networks initialized -------------
[Network G] Total number of parameters : 54.414 M
[Network D] Total number of parameters : 2.769 M
-----------------------------------------------
Setting up a new session...
create web directory main/nuclei/web...
/usr/local/lib/python3.10/dist-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
learning rate 0.0020000 -> 0.0020000
/usr/lib/python3.10/multiprocessing/popen_fork.py:66: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.
  self.pid = os.fork()
(epoch: 1, iters: 100, time: 0.087, data: 0.143) G_GAN: 0.586 G_L1: 6.048 D_real: 1.075 D_fake: 0.677 
(epoch: 1, iters: 200, time: 0.088, data: 0.002) G_GAN: 1.217 G_L1: 3.644 D_real: 1.859 D_fake: 0.247 
(epoch: 1, iters: 300, time: 0.076, data: 0.002) G_GAN: 0.964 G_L1: 10.871 D_real: 0.556 D_fake: 0.527 
(epoch: 1, iters: 400, time: 0.264, data: 0.009) G_GAN: 0.988 G_L1: 8.205 D_real: 0.432 D_fake: 0.529 
saving the model at the end of epoch 1, iters 427
End of epoch 1 / 21      Time Taken: 33 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 2, iters: 73, time: 0.088, data: 0.002) G_GAN: 1.048 G_L1: 4.106 D_real: 1.020 D_fake: 0.405 
(epoch: 2, iters: 173, time: 0.089, data: 0.002) G_GAN: 0.919 G_L1: 4.347 D_real: 0.280 D_fake: 0.596 
(epoch: 2, iters: 273, time: 0.089, data: 0.002) G_GAN: 1.125 G_L1: 15.107 D_real: 0.027 D_fake: 0.487 
(epoch: 2, iters: 373, time: 0.260, data: 0.002) G_GAN: 0.691 G_L1: 5.461 D_real: 0.378 D_fake: 0.756 
saving the model at the end of epoch 2, iters 854
End of epoch 2 / 21      Time Taken: 25 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 3, iters: 46, time: 0.091, data: 0.002) G_GAN: 1.146 G_L1: 9.877 D_real: 0.018 D_fake: 0.935 
(epoch: 3, iters: 146, time: 0.086, data: 0.002) G_GAN: 1.050 G_L1: 6.668 D_real: 0.314 D_fake: 1.927 
(epoch: 3, iters: 246, time: 0.092, data: 0.002) G_GAN: 1.341 G_L1: 6.983 D_real: 1.525 D_fake: 0.290 
(epoch: 3, iters: 346, time: 0.252, data: 0.002) G_GAN: 1.493 G_L1: 15.646 D_real: 0.015 D_fake: 0.344 
saving the model at the end of epoch 3, iters 1281
End of epoch 3 / 21      Time Taken: 30 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 4, iters: 19, time: 0.091, data: 0.002) G_GAN: 1.207 G_L1: 9.371 D_real: 0.062 D_fake: 0.463 
(epoch: 4, iters: 119, time: 0.082, data: 0.002) G_GAN: 0.773 G_L1: 2.439 D_real: 0.641 D_fake: 0.514 
(epoch: 4, iters: 219, time: 0.094, data: 0.002) G_GAN: 1.282 G_L1: 4.250 D_real: 0.276 D_fake: 0.403 
(epoch: 4, iters: 319, time: 0.291, data: 0.002) G_GAN: 1.035 G_L1: 5.616 D_real: 1.344 D_fake: 0.468 
(epoch: 4, iters: 419, time: 0.094, data: 0.002) G_GAN: 1.019 G_L1: 9.700 D_real: 0.065 D_fake: 0.586 
saving the model at the end of epoch 4, iters 1708
End of epoch 4 / 21      Time Taken: 25 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 5, iters: 92, time: 0.084, data: 0.002) G_GAN: 0.994 G_L1: 3.801 D_real: 1.195 D_fake: 0.414 
(epoch: 5, iters: 192, time: 0.097, data: 0.009) G_GAN: 1.117 G_L1: 2.773 D_real: 1.197 D_fake: 0.373 
(epoch: 5, iters: 292, time: 0.309, data: 0.002) G_GAN: 0.665 G_L1: 6.393 D_real: 0.675 D_fake: 0.799 
(epoch: 5, iters: 392, time: 0.097, data: 0.009) G_GAN: 0.937 G_L1: 4.719 D_real: 0.594 D_fake: 0.457 
saving the model at the end of epoch 5, iters 2135
End of epoch 5 / 21      Time Taken: 26 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 6, iters: 65, time: 0.087, data: 0.002) G_GAN: 0.696 G_L1: 2.763 D_real: 0.747 D_fake: 0.720 
(epoch: 6, iters: 165, time: 0.096, data: 0.003) G_GAN: 0.844 G_L1: 2.969 D_real: 1.070 D_fake: 0.446 
(epoch: 6, iters: 265, time: 0.325, data: 0.002) G_GAN: 0.983 G_L1: 7.875 D_real: 0.426 D_fake: 0.544 
(epoch: 6, iters: 365, time: 0.093, data: 0.013) G_GAN: 0.785 G_L1: 2.602 D_real: 0.794 D_fake: 0.555 
saving the model at the end of epoch 6, iters 2562
End of epoch 6 / 21      Time Taken: 26 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 7, iters: 38, time: 0.089, data: 0.003) G_GAN: 1.126 G_L1: 4.470 D_real: 0.341 D_fake: 0.397 
(epoch: 7, iters: 138, time: 0.095, data: 0.009) G_GAN: 0.838 G_L1: 3.879 D_real: 0.982 D_fake: 0.578 
(epoch: 7, iters: 238, time: 0.319, data: 0.002) G_GAN: 0.689 G_L1: 2.480 D_real: 0.746 D_fake: 0.725 
(epoch: 7, iters: 338, time: 0.096, data: 0.002) G_GAN: 0.822 G_L1: 6.051 D_real: 1.074 D_fake: 0.515 
saving the model at the end of epoch 7, iters 2989
End of epoch 7 / 21      Time Taken: 29 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 8, iters: 11, time: 0.094, data: 0.002) G_GAN: 1.011 G_L1: 3.824 D_real: 1.336 D_fake: 0.394 
(epoch: 8, iters: 111, time: 0.085, data: 0.002) G_GAN: 0.746 G_L1: 2.095 D_real: 0.815 D_fake: 0.595 
(epoch: 8, iters: 211, time: 0.268, data: 0.002) G_GAN: 0.974 G_L1: 5.989 D_real: 0.816 D_fake: 0.460 
(epoch: 8, iters: 311, time: 0.090, data: 0.002) G_GAN: 0.948 G_L1: 2.710 D_real: 0.977 D_fake: 0.456 
(epoch: 8, iters: 411, time: 0.097, data: 0.002) G_GAN: 1.044 G_L1: 8.212 D_real: 0.044 D_fake: 0.665 
saving the model at the end of epoch 8, iters 3416
End of epoch 8 / 21      Time Taken: 29 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 9, iters: 84, time: 0.088, data: 0.002) G_GAN: 0.557 G_L1: 2.788 D_real: 0.628 D_fake: 0.998 
(epoch: 9, iters: 184, time: 0.263, data: 0.002) G_GAN: 1.392 G_L1: 7.497 D_real: 0.099 D_fake: 0.713 
(epoch: 9, iters: 284, time: 0.085, data: 0.002) G_GAN: 0.761 G_L1: 2.846 D_real: 0.830 D_fake: 0.563 
(epoch: 9, iters: 384, time: 0.096, data: 0.002) G_GAN: 0.758 G_L1: 3.210 D_real: 0.932 D_fake: 0.480 
saving the model at the end of epoch 9, iters 3843
End of epoch 9 / 21      Time Taken: 26 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 10, iters: 57, time: 0.086, data: 0.002) G_GAN: 0.839 G_L1: 6.461 D_real: 0.836 D_fake: 0.360 
(epoch: 10, iters: 157, time: 0.259, data: 0.008) G_GAN: 0.757 G_L1: 2.805 D_real: 1.042 D_fake: 0.395 
(epoch: 10, iters: 257, time: 0.085, data: 0.002) G_GAN: 0.979 G_L1: 2.656 D_real: 1.123 D_fake: 0.373 
(epoch: 10, iters: 357, time: 0.095, data: 0.010) G_GAN: 0.694 G_L1: 4.284 D_real: 0.616 D_fake: 0.649 
saving the model at the end of epoch 10, iters 4270
End of epoch 10 / 21     Time Taken: 26 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 11, iters: 30, time: 0.091, data: 0.002) G_GAN: 0.870 G_L1: 2.525 D_real: 1.023 D_fake: 0.489 
(epoch: 11, iters: 130, time: 0.287, data: 0.002) G_GAN: 1.909 G_L1: 8.800 D_real: 0.337 D_fake: 0.290 
(epoch: 11, iters: 230, time: 0.064, data: 0.002) G_GAN: 1.143 G_L1: 6.079 D_real: 0.193 D_fake: 0.833 
(epoch: 11, iters: 330, time: 0.096, data: 0.004) G_GAN: 0.710 G_L1: 3.080 D_real: 0.538 D_fake: 0.844 
saving the model at the end of epoch 11, iters 4697
End of epoch 11 / 21     Time Taken: 27 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 12, iters: 3, time: 0.134, data: 0.002) G_GAN: 1.131 G_L1: 5.544 D_real: 0.818 D_fake: 0.314 
(epoch: 12, iters: 103, time: 0.277, data: 0.001) G_GAN: 1.229 G_L1: 9.888 D_real: 0.029 D_fake: 0.606 
(epoch: 12, iters: 203, time: 0.096, data: 0.002) G_GAN: 1.264 G_L1: 6.673 D_real: 0.229 D_fake: 0.409 
(epoch: 12, iters: 303, time: 0.095, data: 0.002) G_GAN: 0.899 G_L1: 4.113 D_real: 1.100 D_fake: 0.407 
saving the latest model (epoch 12, total_iters 5000)
(epoch: 12, iters: 403, time: 0.093, data: 0.002) G_GAN: 1.282 G_L1: 4.151 D_real: 1.146 D_fake: 0.310 
saving the model at the end of epoch 12, iters 5124
End of epoch 12 / 21     Time Taken: 27 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 13, iters: 76, time: 0.281, data: 0.002) G_GAN: 0.706 G_L1: 3.928 D_real: 0.790 D_fake: 0.592 
(epoch: 13, iters: 176, time: 0.092, data: 0.002) G_GAN: 0.800 G_L1: 2.818 D_real: 0.867 D_fake: 0.532 
(epoch: 13, iters: 276, time: 0.077, data: 0.002) G_GAN: 0.897 G_L1: 4.423 D_real: 0.567 D_fake: 0.503 
(epoch: 13, iters: 376, time: 0.098, data: 0.002) G_GAN: 1.138 G_L1: 2.300 D_real: 1.324 D_fake: 0.321 
saving the model at the end of epoch 13, iters 5551
End of epoch 13 / 21     Time Taken: 26 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 14, iters: 49, time: 0.278, data: 0.002) G_GAN: 0.989 G_L1: 5.559 D_real: 0.322 D_fake: 0.499 
(epoch: 14, iters: 149, time: 0.095, data: 0.002) G_GAN: 0.609 G_L1: 2.728 D_real: 0.581 D_fake: 0.750 
(epoch: 14, iters: 249, time: 0.096, data: 0.002) G_GAN: 0.769 G_L1: 3.295 D_real: 0.604 D_fake: 0.666 
(epoch: 14, iters: 349, time: 0.095, data: 0.002) G_GAN: 0.733 G_L1: 1.984 D_real: 0.612 D_fake: 0.657 
saving the model at the end of epoch 14, iters 5978
End of epoch 14 / 21     Time Taken: 26 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 15, iters: 22, time: 0.293, data: 0.002) G_GAN: 1.267 G_L1: 9.006 D_real: 0.164 D_fake: 0.821 
(epoch: 15, iters: 122, time: 0.094, data: 0.002) G_GAN: 0.765 G_L1: 3.335 D_real: 0.525 D_fake: 0.713 
(epoch: 15, iters: 222, time: 0.095, data: 0.002) G_GAN: 1.041 G_L1: 3.959 D_real: 0.275 D_fake: 0.745 
(epoch: 15, iters: 322, time: 0.094, data: 0.002) G_GAN: 0.775 G_L1: 1.818 D_real: 0.439 D_fake: 0.830 
(epoch: 15, iters: 422, time: 0.191, data: 0.002) G_GAN: 0.763 G_L1: 2.661 D_real: 0.854 D_fake: 0.431 
saving the model at the end of epoch 15, iters 6405
End of epoch 15 / 21     Time Taken: 26 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 16, iters: 95, time: 0.095, data: 0.002) G_GAN: 0.902 G_L1: 3.343 D_real: 0.700 D_fake: 0.473 
(epoch: 16, iters: 195, time: 0.094, data: 0.002) G_GAN: 1.415 G_L1: 7.038 D_real: 0.964 D_fake: 0.259 
(epoch: 16, iters: 295, time: 0.096, data: 0.002) G_GAN: 0.953 G_L1: 2.611 D_real: 0.706 D_fake: 0.494 
(epoch: 16, iters: 395, time: 0.289, data: 0.002) G_GAN: 1.028 G_L1: 6.327 D_real: 0.241 D_fake: 0.836 
saving the model at the end of epoch 16, iters 6832
End of epoch 16 / 21     Time Taken: 26 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 17, iters: 68, time: 0.095, data: 0.002) G_GAN: 0.829 G_L1: 3.864 D_real: 0.436 D_fake: 0.853 
(epoch: 17, iters: 168, time: 0.094, data: 0.002) G_GAN: 1.207 G_L1: 8.182 D_real: 0.393 D_fake: 0.590 
(epoch: 17, iters: 268, time: 0.095, data: 0.002) G_GAN: 0.980 G_L1: 2.963 D_real: 0.658 D_fake: 0.628 
(epoch: 17, iters: 368, time: 0.278, data: 0.002) G_GAN: 0.822 G_L1: 2.677 D_real: 0.688 D_fake: 0.600 
saving the model at the end of epoch 17, iters 7259
End of epoch 17 / 21     Time Taken: 26 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 18, iters: 41, time: 0.094, data: 0.002) G_GAN: 1.819 G_L1: 9.183 D_real: 0.146 D_fake: 0.309 
(epoch: 18, iters: 141, time: 0.097, data: 0.002) G_GAN: 1.119 G_L1: 4.914 D_real: 0.192 D_fake: 0.625 
(epoch: 18, iters: 241, time: 0.096, data: 0.002) G_GAN: 0.628 G_L1: 4.281 D_real: 0.358 D_fake: 0.871 
(epoch: 18, iters: 341, time: 0.446, data: 0.002) G_GAN: 0.917 G_L1: 2.483 D_real: 0.896 D_fake: 0.492 
saving the model at the end of epoch 18, iters 7686
End of epoch 18 / 21     Time Taken: 27 sec
learning rate 0.0020000 -> 0.0020000
(epoch: 19, iters: 14, time: 0.093, data: 0.002) G_GAN: 0.789 G_L1: 2.937 D_real: 0.839 D_fake: 0.601 
(epoch: 19, iters: 114, time: 0.096, data: 0.002) G_GAN: 0.639 G_L1: 4.513 D_real: 0.291 D_fake: 0.892 
(epoch: 19, iters: 214, time: 0.096, data: 0.002) G_GAN: 1.036 G_L1: 4.827 D_real: 0.296 D_fake: 0.605 
(epoch: 19, iters: 314, time: 0.302, data: 0.002) G_GAN: 0.604 G_L1: 2.776 D_real: 0.747 D_fake: 0.665 
(epoch: 19, iters: 414, time: 0.094, data: 0.002) G_GAN: 1.134 G_L1: 13.114 D_real: 0.044 D_fake: 0.591 
saving the model at the end of epoch 19, iters 8113
End of epoch 19 / 21     Time Taken: 29 sec
learning rate 0.0020000 -> 0.0010000
(epoch: 20, iters: 87, time: 0.079, data: 0.002) G_GAN: 0.766 G_L1: 3.552 D_real: 0.679 D_fake: 0.664 
(epoch: 20, iters: 187, time: 0.094, data: 0.002) G_GAN: 0.753 G_L1: 2.587 D_real: 0.617 D_fake: 0.657 
(epoch: 20, iters: 287, time: 0.281, data: 0.002) G_GAN: 0.835 G_L1: 5.238 D_real: 0.179 D_fake: 0.807 
(epoch: 20, iters: 387, time: 0.092, data: 0.002) G_GAN: 0.806 G_L1: 3.854 D_real: 0.622 D_fake: 0.597 
saving the model at the end of epoch 20, iters 8540
End of epoch 20 / 21     Time Taken: 26 sec
learning rate 0.0010000 -> 0.0000000
(epoch: 21, iters: 60, time: 0.081, data: 0.002) G_GAN: 1.085 G_L1: 2.729 D_real: 0.832 D_fake: 0.426 
(epoch: 21, iters: 160, time: 0.096, data: 0.002) G_GAN: 1.188 G_L1: 6.906 D_real: 0.538 D_fake: 0.395 
(epoch: 21, iters: 260, time: 0.304, data: 0.002) G_GAN: 0.840 G_L1: 3.016 D_real: 0.782 D_fake: 0.592 
(epoch: 21, iters: 360, time: 0.095, data: 0.014) G_GAN: 0.890 G_L1: 6.335 D_real: 0.433 D_fake: 0.604 
saving the model at the end of epoch 21, iters 8967
End of epoch 21 / 21     Time Taken: 28 sec
Code
!python test.py --dataroot ./datasets/new_seq_data/ --direction AtoB --model pix2pix --name nuclei --checkpoints_dir main
----------------- Options ---------------
             aspect_ratio: 1.0                           
               batch_size: 1                             
          checkpoints_dir: main                             [default: ./checkpoints]
                crop_size: 256                           
                 dataroot: ./datasets/new_seq_data/         [default: None]
             dataset_mode: aligned                       
                direction: AtoB                          
          display_winsize: 256                           
                    epoch: latest                        
                     eval: False                         
                  gpu_ids: 0                             
                init_gain: 0.02                          
                init_type: normal                        
                 input_nc: 3                             
                  isTrain: False                            [default: None]
                load_iter: 0                                [default: 0]
                load_size: 256                           
         max_dataset_size: inf                           
                    model: pix2pix                          [default: test]
               n_layers_D: 3                             
                     name: nuclei                           [default: experiment_name]
                      ndf: 64                            
                     netD: basic                         
                     netG: unet_256                      
                      ngf: 64                            
               no_dropout: False                         
                  no_flip: False                         
                     norm: batch                         
                 num_test: 50                            
              num_threads: 4                             
                output_nc: 3                             
                    phase: test                          
               preprocess: resize_and_crop               
              results_dir: ./results/                    
           serial_batches: False                         
                   suffix:                               
                use_wandb: False                         
                  verbose: False                         
       wandb_project_name: CycleGAN-and-pix2pix          
----------------- End -------------------
dataset [AlignedDataset] was created
initialize network with normal
model [Pix2PixModel] was created
loading the model from main/nuclei/latest_net_G.pth
---------- Networks initialized -------------
[Network G] Total number of parameters : 54.414 M
-----------------------------------------------
creating web directory ./results/nuclei/test_latest
processing (0000)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_3_10.png']
processing (0005)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_3_15.png']
processing (0010)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_3_5.png']
processing (0015)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_4_10.png']
processing (0020)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_4_16.png']
processing (0025)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_4_6.png']
processing (0030)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_5_11.png']
processing (0035)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_5_16.png']
processing (0040)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_5_5.png']
processing (0045)-th image... ['./datasets/new_seq_data/test/A1_registered_HEnorm_crop_1_6_10.png']
Code
calculate_error_score()
0.8325414751422656