diff --git a/3-run-and-customized.ipynb b/3-iterative-boltzmann-inversion/3-ibi.ipynb
similarity index 88%
rename from 3-run-and-customized.ipynb
rename to 3-iterative-boltzmann-inversion/3-ibi.ipynb
index 3723c25126a2c16b3c5280dfceab85b1af7781ab..ac3fd26301842158060a73b35403655d21aaadba 100644
--- a/3-run-and-customized.ipynb
+++ b/3-iterative-boltzmann-inversion/3-ibi.ipynb
@@ -2,13 +2,13 @@
  "cells": [
   {
    "cell_type": "markdown",
-   "id": "024005b4",
+   "id": "9218f5d2",
    "metadata": {},
    "source": [
-    "# Learn to run simulations and implement customized coarse-grained polymer models\n",
+    "# Implementing customized coarse-grained polymer models\n",
+    "\n",
+    "As we've dived into the polymer objects in arbd and learned the basic usage of arbd, now it's time to put everything together and build a custom coarse-grained protein model using arbd! Here we are going to take a coarse-grained model, the Hydrophobicity Scale Model as implemented by Dignon et al https://doi.org/10.1371/journal.pcbi.1005941, as our starting point, and we will apply the Iterative Boltzmann Inversion (IBI) procedure to further coarsen the model, combining 4 amino acids into a single bead. \n",
     "\n",
-    "As we've dived into the polymer objects in arbd and learned the basic usage of arbd, now it's time to put everything together and run coarse-grained protein model using arbd! \n",
-    "We will construct a polymer model by coarse-graining the HpsModel.\n",
     "\n",
     "## Step 1: Model Construction\n",
     "\n",
@@ -18,10 +18,11 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "28a28e18",
+   "id": "0076eb69",
    "metadata": {},
    "outputs": [],
    "source": [
+    "## Usually this script would be put in it's own python file so it can be imported into multiple other scripts\n",
     "import numpy as np\n",
     "from pathlib import Path\n",
     "from arbdmodel import ArbdEngine\n",
@@ -98,7 +99,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "04d59bef",
+   "id": "fa9b96ae",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -154,10 +155,10 @@
   },
   {
    "cell_type": "markdown",
-   "id": "5fb80332",
+   "id": "54d218e7",
    "metadata": {},
    "source": [
-    "## Step 2: Example script to coarsen the model\n",
+    "## Step 2: Setting up a coarser model\n",
     "This is an example script to coarsen the model by combining {amino_acids_per_bead_group} amino acids (GLFG) into a single bead,\n",
     "neglecting the C-to-N terminal direction.\n",
     "\n",
@@ -167,13 +168,14 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "f498d24c-3ce8-48d4-a5a4-c855eb73930f",
+   "id": "f3046111",
    "metadata": {},
    "outputs": [],
    "source": [
+    "## Usually this script would be put in it's own python file so it can be imported into multiple other scripts\n",
+    "from arbdmodel import ParticleType, PointParticle\n",
     "from arbdmodel.ibi import IBIBond, IBIAngle, IBIDihedral, IBINonbonded\n",
-    "\n",
-    "from info import _seq, polymer_mass\n",
+    "from arbdmodel.polymer import PolymerBeads, PolymerModel\n",
     "\n",
     "amino_acids_per_bead_group=4\n",
     "\n",
@@ -199,9 +201,8 @@
     "\n",
     "    bond = IBIBond('myibi-bond', degrees_of_freedom=[], max_force=10) # These names are important because they are used to write files; avoid name collisions!\n",
     "    angle = IBIAngle('myibi-angle', degrees_of_freedom=[], max_force=10)\n",
-    "    ## No need to override PolymerBeads initialization method\n",
+    "\n",
     "    def __init__(self, polymer, sequence, monomers_per_bead_group, directory = './', **kwargs):\n",
-    "        # self.directory = directory # root path were IBI iterations will be performed and we store IBI potentials\n",
     "        ## This is a little awkard and should be reworked. IBIPotentials should instead record their absolute Path when created. \n",
     "        for pot in (type(self).bond, type(self).angle):\n",
     "            pot.filename_prefix = f'{directory}/IBIPotentials/{pot.name}'\n",
@@ -338,16 +339,16 @@
   },
   {
    "cell_type": "markdown",
-   "id": "7ff51bc8-2453-41d5-92c3-96cd992a46e4",
+   "id": "0b48a10c",
    "metadata": {},
    "source": [
-    "## Step 3: Coarsing the model even more by MDAnalysis"
+    "## Step 3: Mapping the fine grained trajectory to a coarser representation"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "b5aef887",
+   "id": "2a657ca9",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -355,7 +356,7 @@
     "from MDAnalysis.analysis.base import AnalysisFromFunction\n",
     "from MDAnalysis.coordinates.memory import MemoryReader\n",
     "\n",
-    "from coarse_model import MyIBIModel\n",
+    "## from coarse_model import MyIBIModel\n",
     "\n",
     "## MDAnalysis trick to coarsen our model\n",
     "def _fine_to_coarse(*atom_groups):\n",
@@ -390,18 +391,18 @@
   },
   {
    "cell_type": "markdown",
-   "id": "0e7e49fa-53c9-47c8-98b3-0062d4e354cb",
+   "id": "427aee80",
    "metadata": {},
    "source": [
     "## Step 4: Run the coarser IBI simulations\n",
     "\n",
-    "We can run another simulation with the model we've built by updating the potentials against the target distributions with the model we've built.\n"
+    "No that we have target distributions of the coarse-grained degrees of freedom that we plan to apply forces to, we can perform the IBI process, iteratively updating the coarse-grained potentials by the log of the ratio of the simulated coarse-grained distribution relative to the target distribution. The IBI simulations should have the same composition as the original fine-resolution simulations.\n"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "db750860-87f8-4dda-b621-154660ef1b16",
+   "id": "9a1949d8",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -439,50 +440,51 @@
     "\n",
     "model.set_damping_coefficient( 100 )\n",
     "\n",
-    "model.run_IBI(50, engine=engine, replicas=4)"
+    "model.run_IBI(50, engine=engine, replicas=4) ;# 50 steps with 4 replicas for better sampling "
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "64f93e5d-cc09-4215-ab83-1d8e8da409da",
+   "id": "10edfa2e",
    "metadata": {},
    "source": [
-    "## Step 5: Visulization"
+    "## Step 5: Visualization"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "381cea90-f0d6-4c82-8638-483b8a80b735",
+   "id": "16a752c3",
    "metadata": {},
    "outputs": [],
    "source": [
-    "import glob,re\n",
-    "\n",
+    "%matplotlib notebook\n",
     "import matplotlib as mpl\n",
     "# mpl.use('agg')\n",
     "from matplotlib import pyplot as plt\n",
-    "from itertools import chain\n",
     "from matplotlib.animation import FuncAnimation\n",
     "\n",
-    "from scipy.optimize import curve_fit\n",
+    "import glob,re\n",
+    "from pathlib import Path\n",
+    "\n",
+    "import numpy as np\n",
     "\n",
     "\"\"\" Load data \"\"\"\n",
     "from natsort import natsorted\n",
     "from glob import glob\n",
     "\n",
     "prefixes_and_limits = [\n",
-    "    ## prefix, xlimits, rho_ylimits, u_ylimits\n",
-    "    ('myibi-bond', (2,20), (0,0.012), (-0.1,5)),\n",
-    "    ('myibi-angle', (0,180), (0,0.02), (0,4)),\n",
+    "    ## prefix, xlabel, xlimits, rho_ylimits, u_ylimits\n",
+    "    ('myibi-bond', r'Distance ($\\AA$)', (2,20), (0,0.012), (-0.1,5)),\n",
+    "    ('myibi-angle', 'Angle (deg.)', (0,180), (0,0.02), (0,4)),\n",
     "    # ('myibi-dihedral*', (None,None), (0,0.025), (-0.1,1.5)),\n",
-    "    ('nb-P_P', (0,50), (0,0.01), (-0.2,2)),\n",
+    "    ('nb-P_P', r'Distance ($\\AA$)', (0,50), (0,0.01), (-0.2,2)),\n",
     "]\n",
     "\n",
     "all_targets = { pre:{\n",
     "    '.'.join( Path(f).stem.split('.')[:-1] ): np.loadtxt(f).T\n",
     "    for f in glob(f'IBIPotentials/{pre}.target.dat') }\n",
-    "                for pre,lx,l1,l2 in prefixes_and_limits}\n",
+    "                for pre,xlabel,lx,l1,l2 in prefixes_and_limits}\n",
     "\n",
     "def get_data(i, prefix):\n",
     "\n",
@@ -493,7 +495,7 @@
     "\n",
     "    tmp = []\n",
     "    for pre,(bin_aa,rho_aa,trash) in all_targets[prefix].items():\n",
-    "        bin_cg,rho_cg,trash = np.loadtxt( 'IBIPotentials/{}-{:03d}-raw.cg.dat'.format(pre,i) ).T\n",
+    "        bin_cg,rho_cg,_ = np.loadtxt( 'IBIPotentials/{}-{:03d}-raw.cg.dat'.format(pre,i) ).T\n",
     "        x1.append(bin_cg)\n",
     "        y1.append(rho_cg)\n",
     "\n",
@@ -506,12 +508,12 @@
     "last_index = int(natsorted(glob(f'{_pre}*-raw.cg.dat'))[-1].replace(_pre,'').replace('-raw.cg.dat',''))\n",
     "\n",
     "\n",
-    "fig,axes = plt.subplots( 2, len(prefixes_and_limits),\n",
+    "fig,axes = plt.subplots( 2, len(prefixes_and_limits), figsize=(2.75,2),\n",
     "                         squeeze=False, constrained_layout=True,\n",
     "                         sharex='col')\n",
     "## Set up plots\n",
     "all_lines = []\n",
-    "for j,((prefix,lx,l1,l2),(ax2,ax1)) in enumerate(zip(prefixes_and_limits, axes.T)):\n",
+    "for j,((prefix,xlabel,lx,l1,l2),(ax2,ax1)) in enumerate(zip(prefixes_and_limits, axes.T)):\n",
     "    lines = []\n",
     "    colors = []\n",
     "    local_lines = []\n",
@@ -534,10 +536,11 @@
     "    lines.append(local_lines)\n",
     "\n",
     "    all_lines.append(lines)\n",
+    "    \n",
     "def init():\n",
     "    # update(last_index)                  # set data to 20th frame for determining limits\n",
     "\n",
-    "    for j,((prefix,lx,l1,l2),(ax2,ax1)) in enumerate(zip(prefixes_and_limits, axes.T)):\n",
+    "    for j,((prefix,xlabel,lx,l1,l2),(ax2,ax1)) in enumerate(zip(prefixes_and_limits, axes.T)):\n",
     "        ax1.set_xlim(*lx)\n",
     "        ax1.set_ylim(*l1)\n",
     "        ax2.set_ylim(*l2)\n",
@@ -548,14 +551,15 @@
     "        if j == 0:\n",
     "            ax1.set_ylabel(f'Density (a.u.)')\n",
     "            ax2.set_ylabel(f'Potential (kcal/mol)')\n",
+    "        ax1.set_xlabel(xlabel)\n",
     "\n",
     "    return [line for lines in all_lines for lines2 in lines for line in lines2]\n",
     "\n",
     "def update(frame):\n",
     "    i = frame\n",
-    "    for j,((prefix,lx,l1,l2),lines) in enumerate(zip(prefixes_and_limits, all_lines)):\n",
+    "    for j,((prefix,xlabel,lx,l1,l2),lines) in enumerate(zip(prefixes_and_limits, all_lines)):\n",
     "        x1s,y1s,x2s,y2s = get_data(i, prefix)\n",
-    "        for (pre,(x,y,trash)),ln in zip(all_targets[prefix].items(), lines[0]):\n",
+    "        for (pre,(x,y,_)),ln in zip(all_targets[prefix].items(), lines[0]):\n",
     "            ln.set_data(x,y)\n",
     "\n",
     "        for ln,x,y in zip(lines[1],x1s,y1s):\n",
@@ -564,21 +568,30 @@
     "            ln.set_data(x,y)\n",
     "    return [line for lines in all_lines for lines2 in lines for line in lines2]\n",
     "    return [line for lines in all_lines for line in lines]\n",
-    "# for i in chain([1], range(1,25)):\n",
-    "#     fig.savefig(\"3-convergence/{:03d}.png\".format(i))\n",
     "\n",
     "ani = FuncAnimation(fig, update, frames=np.arange(1,last_index+1,dtype=int),\n",
     "                    init_func=init, blit=True)\n",
     "\n",
-    "ani.save(__file__.replace('.py','.mp4'))\n",
+    "try:\n",
+    "    ani.save('convergence.mp4')\n",
+    "except:\n",
+    "    print('Unable to save animation')\n",
     "plt.show()\n",
     "                                "
    ]
   }
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3 (ipykernel)",
+   "display_name": "Python 3",
    "language": "python",
    "name": "python3"
   },
@@ -592,7 +605,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.8.19"
+   "version": "3.8.18"
   }
  },
  "nbformat": 4,