• fmstrat@lemmy.nowsci.com · 3 months ago

    I remember this. I also remember using scp instead. And ftp, if I go back far enough. rsync is still my friend, though zfs has mostly replaced it now.

    • BoneALisa@lemm.ee · 3 months ago

      How has zfs replaced rsync for you? One is a filesystem, and the other is a file-syncing tool. Does zfs do something I’m not aware of lol?

      • fmstrat@lemmy.nowsci.com · 3 months ago

        I used to use rsync to copy data from my storage array on one machine to an external drive and an off-site backup. Since a lot of it was code, it always took forever to scan all the small files, and I had to script unlocking the remote partitions.

        With encrypted ZFS, I can just zfs snap and then zfs send, and it does the same thing at the block level, raw, so it’s way faster, transfers less data, and there’s no need to send a key or passphrase unless I need to mount it at the destination (meaning a cloud provider could never read the data, for instance).

        ZFS is also recursive, so if I have /storage and /storage/stuff defined, I can snap and send at either level, which makes it as versatile as rsync.
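        Roughly what that workflow looks like (a minimal sketch; the pool name, snapshot names, and backup host here are made up):

            # snapshot the dataset and everything beneath it
            zfs snapshot -r storage@2024-05-01

            # raw (-w) recursive (-R) send of the encrypted blocks; the key never leaves this machine
            zfs send -w -R storage@2024-05-01 | ssh backup-host zfs receive -d backuppool

            # later runs only send the blocks changed since the previous snapshot
            zfs send -w -R -i storage@2024-04-01 storage@2024-05-01 | ssh backup-host zfs receive -d backuppool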

        • BoneALisa@lemm.ee · 3 months ago

          Oh interesting, I am not super familiar with zfs’ tools, so that’s pretty cool! I’ll have to look at that for my storage array.

  • stembolts@programming.dev · 3 months ago

    This application looks fine to me.

    Clearly labeled sections.

    Local on one side, remote on the other.

    Transfer window on bottom.

    No space for anything besides function. Is the joke going over my head?

    • tiramichu@lemm.ee · 3 months ago

      I’m sure there’s nothing wrong with the program at all =)

      The modern webapp deployment approach is typically to have an automated continuous build and deployment pipeline triggered from source control, which deploys into a staging environment for testing and then promotes the same precise tested artifacts to production. Probably all in the cloud too.
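      Just to illustrate the “build once, promote the same artifact” idea - a rough sketch with made-up hosts and a Node build step assumed; in practice a CI system runs these steps on every push rather than a person:

          # build and test the artifact exactly once
          npm ci && npm run build && npm test
          tar czf app-1.4.2.tar.gz dist/

          # ship that artifact to staging and run checks there...
          scp app-1.4.2.tar.gz deploy@staging.example.com:/srv/app/releases/

          # ...then promote the exact same tarball to production (no rebuild)
          scp app-1.4.2.tar.gz deploy@prod.example.com:/srv/app/releases/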

      Compared to that, manually FTPing the files up to the server seems ridiculously antiquated, to the extent that newbies in the biz can’t even believe we ever did it that way. But it’s genuinely what we were all doing not so long ago.

    • marcos@lemmy.world · 3 months ago

      The year Linux takes over the desktop!

      I feel like the reason nobody uses FileZilla and the like anymore is that everybody who wanted it has already migrated to Linux. So seriously, it already happened.

  • FQQD@lemmy.ohaa.xyz · 3 months ago

    People don’t use FileZilla for server management anymore? I feel like I’ve missed that memo.

    • RonSijm@programming.dev · 3 months ago

      I suppose in the days of ‘Cloud Hosting’ a lot of people (hopefully) don’t just randomly upload new files (manually) to a server anymore.

      Even if you still use normal servers that behave like this, a better practice would be to have a build server that creates builds: whenever you check code into the main branch, it produces a deployable build for the server, and you deploy it from there - instead of compiling locally, opening FileZilla and doing an upload.

      If you’re using ‘Cloud Hosting’ - for example AWS - and you run VMs or bare metal, you’d maybe create Elastic Beanstalk application versions or Machine Images, upload a new one as a new version, and deploy that in a more managed way. Or if you’re using Docker, you just push a new Docker image to a Docker registry and deploy that.
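      For the Docker route, the “deploy” is basically just pushing a tagged image and telling the host (or orchestrator) to run it - a rough sketch, with the registry, image name, tag, and host all made up:

          # build and push a tagged image to the registry
          docker build -t registry.example.com/myapp:1.4.2 .
          docker push registry.example.com/myapp:1.4.2

          # on the server (or via your orchestrator), pull the new tag and restart;
          # assumes a compose file on the host that references that image tag
          ssh deploy@app-host 'docker pull registry.example.com/myapp:1.4.2 && docker compose up -d'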

      • dan@upvote.au · 3 months ago

        For some of my sites, I still build on my PC and rsync the build directory across. I’ve been meaning to set up GitLab or something similar and configure automated deployments.
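        Something like this goes a long way for a pre-built site (the paths and host are placeholders):

            # push the local build output to the web root; -a preserves permissions/times,
            # -z compresses over the wire, --delete removes files that no longer exist locally
            rsync -avz --delete ./public/ deploy@example.com:/var/www/mysite/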

        • amazing_stories@lemmy.world · 3 months ago

          This is what I do because my sites aren’t complicated enough to warrant a build system. Personally I think most websites out there are over-engineered. Example: a Discord friend made a React site that displays stats from a gaming server. It looks nice, but you literally can’t hyperlink to any of the data - it can only be loaded dynamically, and it only looks coherent on a phone in portrait mode. There are a lot of people following trends (some good trends) without really thinking about why.

          • dan@upvote.au · 3 months ago

            I’m starting to like the htmx model a lot: a server-rendered app that uses HTML attributes to configure the dynamic bits (e.g. which URL to hit and which DOM element to insert the response into). You don’t have to write much JS (or any, in some cases).
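            Something in that spirit (a tiny sketch; the endpoint and element IDs are made up):

                <!-- clicking the button fetches /stats from the server and swaps
                     the returned HTML fragment into #stats-panel -->
                <button hx-get="/stats" hx-target="#stats-panel" hx-swap="innerHTML">
                  Refresh stats
                </button>
                <div id="stats-panel"></div>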

            > you literally can’t hyperlink to any of the data

            I thought most React-powered frameworks come with a URL router out of the box these days? The developer does need to have a rough idea of what they’re doing, though.