Editing/Restoring photos (My Method)

Discussion in 'Misc Discussion' started by Glenn, Jan 27, 2024.

  1. Glenn

    Glenn Administrator Staff Member

    I've been spending some of my time repairing/restoring photos on some Reddit threads where users post damaged photos for repair.

    I just thought I'd share my methods, so others may use them or offer their own tips:

    1. Load the image into Photoshop and run Noiseware to clean up noise.
    2. Run Dust & Scratches to tidy up the worst of it.
    3. Manually use the Healing Brush to remove any defects and to expand photos with non-square edges so they fill the canvas.
    4. Use the Smudge tool to blend jagged edges or soften other problem areas.
    5. Use Auto Tone or Auto Color so the image uses the full tonal range.
    6. Use Topaz Photo AI to upscale and/or recover faces etc.
    7. Use https://palette.fm/ to colorize the photo.
    8. Load the uncolored image into Photoshop, put the palette.fm color layer above it set to the Color blend mode, and stretch the color layer to fit the full-res photo (see the sketch after this list).
    9. Hide the color layer and use Neural Filters to Photo Restore and/or Colorize the base image.
    10. Pick the best of both color methods and blend the layers to suit.
    11. Save the image as a .png and open it with https://playground.com/create
    12. On the website (Image to Image), pick the .png, set it to use 90% of the image in the AI-generated result, set the prompt to ., change the Filter to Realistic Stock Photo (or one that suits), set Size to max (landscape or portrait depending on the source), and press Generate.
    13. Click the generated image you like, press Action then Upscale, and download the image.
    14. Put it back into Photoshop to finish it off, including adding noise back to the image to make it look more authentic.
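
    Step 8 is basically a chroma transfer: keep the luminance of the full-res photo and take only the colour from the smaller palette.fm output. If you'd rather script that than do it in Photoshop, here's a minimal Python sketch using Pillow (YCbCr is only an approximation of Photoshop's Color blend mode, and the filenames are placeholders):

    ```python
    # Minimal sketch of step 8 outside Photoshop: keep luminance from the
    # full-res original, take chroma from the (smaller) palette.fm output.
    # Filenames are placeholders; requires Pillow (pip install pillow).
    from PIL import Image

    base = Image.open("scan_fullres.png").convert("YCbCr")            # hi-res original
    colour = Image.open("palettefm_colourised.png").convert("YCbCr")  # low-res colourised

    # Stretch the colour layer to fit the full-res photo (like dragging the
    # layer corners in Photoshop), then split both into channels.
    colour = colour.resize(base.size, Image.LANCZOS)
    y, _, _ = base.split()       # luminance from the sharp original
    _, cb, cr = colour.split()   # chroma from the colourised version

    result = Image.merge("YCbCr", (y, cb, cr)).convert("RGB")
    result.save("recoloured_fullres.png")
    ```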

    Yeah, it's a lot of work, but it's great to get the results you're after, and the people requesting the repairs really appreciate it.

    I've found that Affinity Photo 2 can do FFT noise reduction on textured photo scans and/or remove repeating noise really, really well. This can be handy for old photo scans too; I do it before importing the image into Photoshop and then continue from step 1 :D
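
    If you don't have Affinity Photo, the same idea (knocking out repeating texture in the frequency domain) can be roughed out with NumPy. This is only a sketch, assuming a grayscale scan and a crude threshold on the spectrum rather than Affinity's interactive tools; the radius and threshold values are placeholders you'd tune per image:

    ```python
    # Rough sketch of FFT noise reduction on a grayscale scan: repeating
    # texture shows up as bright isolated spikes in the spectrum, which we
    # zero out while protecting the low frequencies that hold the image.
    # Requires numpy and Pillow; values below are placeholders to tune.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("old_scan.png").convert("L"), dtype=float)

    spec = np.fft.fftshift(np.fft.fft2(img))
    mag = np.abs(spec)

    # Protect a disc of low frequencies around the centre (the actual photo).
    h, w = img.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    keep = (yy - cy) ** 2 + (xx - cx) ** 2 < 40 ** 2

    # Crude notch filter: kill anything far brighter than the typical bin.
    outside = mag[~keep]
    spec[(mag > outside.mean() + 8 * outside.std()) & ~keep] = 0

    clean = np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
    Image.fromarray(np.clip(clean, 0, 255).astype(np.uint8)).save("old_scan_fft.png")
    ```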
     
  2. bphlpt

    bphlpt A lowly staff member Staff Member

    Can you post some before/after examples that you're especially proud of?
     
  3. Glenn

    Glenn Administrator Staff Member

    I'll just share my latest one:

    [attached before/after: q7yjofees1fc12.jpg, q7yjofees1fc12-1-Done-Final-Col.jpg]
     
    bphlpt likes this.
  4. Glenn

    Glenn Administrator Staff Member

    [attached before/after: 95e3uee68obc1-Orig.jpeg, 95e3uee68obc1-Edit4.jpg]
     
    bphlpt likes this.
  5. Glenn

    Glenn Administrator Staff Member

  6. Glenn

    Glenn Administrator Staff Member

    [attached before/after: Dones.jpg, Dones-New.png]
    Check out the changes it made to the faces on this one; it does an amazing job without any messing about :)
     
    bphlpt likes this.
  7. bphlpt

    bphlpt A lowly staff member Staff Member

    That site does an AMAZING job!
     
  8. Glenn

    Glenn Administrator Staff Member

    [attached before/after: htehbjw01bfc1.jpg, htehbjw01bfc1-Edit3-topaz-denoise-faceai-Best4-Final.png]

    I am getting a little more practice in :)
     
    bphlpt likes this.
  9. bphlpt

    bphlpt A lowly staff member Staff Member

    At what point in your OP process do you see what help CodeFormer can provide for faces? And, out of curiosity, about how long does it take per photo for the whole process, now that you've been doing it a while? I know it could vary wildly depending on all sorts of things, but roughly how long on average? Or maybe a rough time range would be better.
     
  10. Glenn

    Glenn Administrator Staff Member

    CodeFormer slots in at about step 13.5: enhance the faces so they look true to life.

    Most edits that only need upscaling and colour correction take a few minutes. If scratches need repairing, or spills/textures need removing, it can take between 3 minutes and just over an hour. The websites can then add 5-10 minutes for the enhance and colour steps.

    But yeah it varies from minutes to an hour or so.
     
  11. Glenn

    Glenn Administrator Staff Member

    A new trick I've adopted for the AI generation: do one pass at 90% image strength so the faces stay true, and another at 50% so the background gets repaired enough, then blend the faces onto the new background. Works a treat :)
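
    If you want to do that blend outside Photoshop, here's a minimal sketch of the idea: composite the 90% pass over the 50% pass through a soft mask painted white over the faces. The filenames and mask are placeholders, and it assumes both passes came out at comparable sizes:

    ```python
    # Minimal sketch of blending the "90% strength" faces onto the "50%
    # strength" background: a grayscale mask (white = take the faces pass)
    # drives the mix. Filenames/mask are placeholders; requires Pillow.
    from PIL import Image, ImageFilter

    faces_pass = Image.open("gen_90_percent.png").convert("RGB")
    background_pass = Image.open("gen_50_percent.png").convert("RGB")
    mask = Image.open("face_mask.png").convert("L")   # paint white over the faces

    # Feather the mask so the seam disappears, then composite the two passes.
    faces_pass = faces_pass.resize(background_pass.size)
    mask = mask.resize(background_pass.size).filter(ImageFilter.GaussianBlur(8))
    result = Image.composite(faces_pass, background_pass, mask)
    result.save("blended.png")
    ```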

    On a related note, the AI tools prefer to work in colour, so it's best to apply colour and then generate; if you need to recolour later, you always can.
     
    Last edited: Feb 1, 2024
  12. Glenn

    Glenn Administrator Staff Member

    I found another resource:

    https://stablediffusion.gigantic.work/

    This site has the tools to easily change eye colour etc. just by using text in the prompt. It takes a little getting used to, but once you do, it's as useful as Adobe Firefly, and at the moment it's available for free (or for donations). If I start using it enough I'll forward some donations, as it will save me time manually making small changes to my images.
     
  13. Glenn

    Glenn Administrator Staff Member

    Here are some more tips I've discovered along the way:

    Edit out the defects first,
    then correct the tone and/or colours,
    remove bad noise,
    use Camera Raw to dehaze and get better light quality/contrast,
    run it through a mild photo restore in PS,
    colorize before you continue (at least a basic pass, to get better generated skin),
    upscale using GFPGAN or Topaz so the skin texture comes out better (if you upscale before you run it, it will look flat),
    use AI to generate from this completed image to make the clothes/backgrounds better.

    The final steps involve Camera Raw, colour/light changes, etc.

    If you need to, you can overlay the final colourized edit on the very top of the original (defect-repaired) edit, then blend the two to make the result look more authentic to the original (this step isn't required for high-quality sources).
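
    As a rough sketch of that overlay (assuming you'd rather script it than drop the layer opacity in Photoshop): blend the final edit back over the original repaired edit at a chosen opacity to pull some of the original character back in. The filenames and the 0.7 figure are placeholders:

    ```python
    # Rough sketch of the authenticity blend: mix the final colourized edit
    # over the original repaired edit at a fixed opacity (like lowering the
    # layer opacity in Photoshop). Filenames/alpha are placeholders; needs Pillow.
    from PIL import Image

    original_edit = Image.open("edit_defects_repaired.png").convert("RGB")
    final_edit = Image.open("final_colourized.png").convert("RGB").resize(original_edit.size)

    # alpha 0.0 = pure original edit, 1.0 = pure final edit.
    blended = Image.blend(original_edit, final_edit, alpha=0.7)
    blended.save("blended_authentic.png")
    ```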

    AI does an amazing job of most things, except for keeping the eyes, nose, mouth and ears looking similar to the originals; I guess most people have unique features that make them, them. So these may need to be corrected between steps, or just overlaid at the end to some degree.
     
    pacav69, The Freezer and Trouba like this.
  14. Glenn

    Glenn Administrator Staff Member

    I've got a different workflow that I will properly document when I have time, but I'll list the new tools I use, as they are better than the old ones:

    https://www.myheritage.com/incolor

    You only get one free go if you don't pay, but the colours are so much better than any of the others, and the enhancer is pretty good sometimes too.

    I found a tool called Stability Matrix, which lets you install many Stable Diffusion tools (over 200 GB when you get them all, plus a few models, so have space). I use Fooocus to outpaint (make a photo bigger than the original shot; it is amazing to see), and SD WebUI Forge, which in the Extras tab actually has CodeFormer (GFPGAN etc., running much faster locally) and can also generate images or do img2img to smooth off any noisy bits. https://github.com/LykosAI/StabilityMatrix/releases

    Now, the final step that makes a HUGE difference is https://app.decohere.ai/ - when you upscale with this it adds texture details to clothing and makes the photos pop. Considering it was originally only designed to make short movies, the upscale feature they added is incredible (though it sometimes makes patterns when the source is too noisy). I leave Creativity on 2, so you may have to overlay the result on the original and put back the original eyes, nose, mouth and ears for it to be the same person. But it usually keeps the hair the same style and colour while adding way more detail (so another win). You can probably get the same results with Stability Matrix, but I've not figured out how yet, so I was happy to pay under $15 for 150 images (you get 10 free ones a month, BTW).
     
  15. Glenn

    Glenn Administrator Staff Member

  16. Glenn

    Glenn Administrator Staff Member

    Today I have been learning about frequency separation:

    [embedded video]

    The thing with it is that it works to fix spills, mould, and other similar types of damage you might come across. The video also mentions a LOT of keyboard shortcuts that all Photoshop users should learn :)
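
    For the curious, frequency separation is just splitting the image into a blurred low-frequency layer (colour and tone, where stains and spills live) and a high-frequency layer (fine texture and grain), repairing one without disturbing the other, then recombining. Here's a minimal grayscale sketch with NumPy/Pillow; the blur radius is a placeholder you'd tune per image:

    ```python
    # Minimal sketch of frequency separation on a grayscale image:
    #   low  = Gaussian blur of the image  (tone, where stains live)
    #   high = image - low                 (fine texture/grain)
    # Retouch the low layer, then recombine. Radius is a placeholder;
    # requires numpy and Pillow.
    import numpy as np
    from PIL import Image, ImageFilter

    img = Image.open("damaged_photo.png").convert("L")
    low = img.filter(ImageFilter.GaussianBlur(radius=6))

    img_a = np.asarray(img, dtype=float)
    low_a = np.asarray(low, dtype=float)
    high = img_a - low_a                 # texture layer, centred around 0

    # ...retouch low_a here (heal the stain without touching the grain)...

    recombined = np.clip(low_a + high, 0, 255).astype(np.uint8)
    Image.fromarray(recombined).save("recombined.png")
    ```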
     
  17. Glenn

    Glenn Administrator Staff Member

    Here are the latest methods I've been using (copied from a reply to someone who asked me today; I'll have to do a video of it all in action when I find time):

    I use Photoshop to get it cropped and perspective-corrected. I then use Auto Tone/Color to get it closer to neutral and full dynamic range. If it's sepia or badly coloured, I make it grayscale.

    I then use MyHeritage to colour/repair/enhance (sometimes it fails to get one right, so I save between each process).

    I put it through CodeFormer (SD WebUI Forge locally, run through Stability Matrix) to do a better job on the faces.

    I then use playground.ai to make things cleaner.

    Back to Photoshop to blend all that together, as well as using Dust & Scratches removal, Neural Filters and Camera Raw.

    Then I use Fooocus or Photoshop to outpaint the image.

    Then I use decohere.ai/create to upscale, improve the fabric, and clean it up even more. This requires you to put the faces back in, as it loses identity (it can often keep the hair and some features).

    Then I post the results and ask for a small tip when I can, to fund my subscription to Photoshop and decohere.ai.

    Most photos take me between 30 minutes and many hours (depending on damage/quality), so I do think working on my skills and finding the best workflow is worth paying me for.

    There are other tools I use, but this is the general flow.

    If you thought I'd say Remini.ai and be done with it - that's a lot of other restorers, who just want to get in first with sometimes-reasonable results (I've never used that service myself).
     
