Set up rclone using rclone config, then use the mount command: rclone mount remote:bucket/folder /path/to/local/mountpoint
The Long Answer Mixed With Personal Experience:
This is a little bit embarrassing, but I’ve been using Backblaze for years at this point and I’ve only mounted my first B2 bucket just now. I have no idea why it’s taken me so long, and I just sort of stumbled across the solution as I was making manual backups to some folders and my mind was blown.
As you may (or may not) already know, Backblaze B2 has a somewhat limited number of options when it comes to interfacing with your files on Linux. At the time of this writing, Backblaze lists just 3 options for essentially using their B2 service on Linux. I’ve only tried Rclone, and over the last few years it’s been fine.
After taking a few minutes to familiarize myself with the docs, I was able to easily set up my B2 bucket. Once I had it set up, I used it as an extremely simple way to upload large files. It should also be noted that, at the time of this writing, B2’s web interface limits uploads to around 200MB. Anything larger requires you to connect to the service directly via some supported application.
At the beginning, I was just excited to get something to work. After setting up my B2 bucket, I was able to upload and download files and folders using the [bash]rclone copy[/bash] command. And that worked for a bit. Then one day I ran into an issue where I was looking for a specific photo I’d backed up. I knew I’d backed it up, but I didn’t have a way to really search for it especially since it was named something like IMG_0023523.jpg or whatever. And technically, yes, you can preview image files on their web interface, but it is wildly impractical. So that was the first issue I ran into with my limited knowledge of Rclone.
At the time, I also didn’t have a practical way of automating backups to run in the background. It was just up to me to decide, whenever I had time, which folders I wanted to back up and where I wanted to save them, by either memorizing the exact file path on the B2 bucket or finding it with rclone lsd remote:bucket to list all directories in a given path. Still not very usable or practical. So far, it had been working for manual folder backups and for client file deliveries.
However, just this week, I ran into some issues with my computer which caused the POST to fail, so when I hit the power button to boot up, the lights would come on, and the fans would start spinning, but the screen just stayed black. Absolutely scary considering my terrible backup solution. Ended up fixing the issue and got the machine to boot, but now I’m taking this opportunity to back up everything.
So of course, I went at it using the only way I knew at the time: rclone copy path/to/local/folder/ remote:bucket/folder. And off it went. It’s a sizable backup, and it will take a while, and that’s fine. While it ran, I was looking around Backblaze’s newly redesigned site, and I don’t even know how or what I was looking for, but I came across this mysterious rclone mount command. My mind will be forever blown, and all my issues with previewing image files and scheduling backups will now be a distant memory.
Once you’ve set up and configured Rclone, you’ll need to create an empty folder to mount your B2 bucket inside of. I’ve already got several internal storage drives mounted at /mnt/. Yours may be mounted elsewhere, but I think that’s a pretty standard mount location. Once you’ve created your folder, you’ll want to confirm it has the correct permissions to work with rclone mount.
[bash]sudo mkdir /mnt/MyB2MountPoint[/bash] and then [bash]ls -l /mnt/[/bash]. The -l flag will display all the read/write/execute permissions as well as ownership of the files and folders targeted with ls. If you created your mountpoint folder using sudo like I did, there’s a good chance the owner of your newly created mount directory will be root. Depending on how you’ve set up your permissions, this may or may not work. In my case, it didn’t.
To fix this, I simply changed folder ownership to my username: sudo chown username /mnt/MyB2MountPoint. After I’d done that, I was able to run rclone mount remote:bucket/folder /mnt/MyB2MountPoint. Just keep in mind, this command runs in the foreground by default, which means as soon as that terminal window is closed, or the program is killed with CTRL+C or by a SIGINT or SIGTERM signal, the mount will be stopped.
If you prefer, you can mount your B2 drive in the background with rclone mount remote:bucket/folder /mnt/MyB2MountPoint --daemon. If you do this, in order to unmount, you’ll have to do so manually using fusermount -u /path/to/local/mount.
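If you go the --daemon route, it can also be worth turning on rclone’s VFS write cache so applications that expect normal file semantics behave better on the mount. This is just a sketch with commonly used options rather than anything Backblaze-specific; check rclone’s mount docs for what fits your workflow:
rclone mount remote:bucket/folder /mnt/MyB2MountPoint --daemon --vfs-cache-mode writes
fusermount -u /mnt/MyB2MountPoint    # unmount when you’re done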
Next, you just set up your MariaDB database and install WordPress.
cd /var/www/html
sudo rm *
sudo wget http://wordpress.org/latest.tar.gz
sudo tar xzf latest.tar.gz
sudo mv wordpress/* .
sudo rm -rf wordpress latest.tar.gz
Give Apache access to the WordPress folders and files:
sudo chown -R www-data: .
Set up MySQL/MariaDB:
sudo mysql_secure_installation
You will be asked to Enter current password for root (enter for none): Since we’re only setting this server up for testing and development purposes on a local network, I’ll go ahead and enter my root password. In a production environment, you’ll definitely want a strong DB password that’s different from the root password.
Next, you’ll see something like, you've already set a root password, so there's no need to update it (but you can still update it here if you like). Press enter.
Remove anonymous users : y
Disallow root login remotely : y
Remove test database and access to it : y
Reload privilege tables now : y
You should see: All done! Thanks for using MariaDB!
Create a WordPress Database
sudo mysql -uroot -p then enter your root password (or DB password if you set it up differently during the mysql_secure_installation step).
Next you’ll see the MariaDB shell. Your prompt will look like MariaDB [(none)]>. Create a new database named wordpress:
create database wordpress;
Mind the semicolon- it’s required.
If this was successful, you’ll see Query OK, 1 row affected (0.00 sec)
Now you can grant DB privileges to root. Replace PASSWORDGOESHERE with your password.
GRANT ALL PRIVILEGES ON wordpress.* TO 'root'@'localhost' IDENTIFIED BY 'PASSWORDGOESHERE';
FLUSH PRIVILEGES;
Exit with CTRL + D
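Before moving on, it can’t hurt to confirm the new database actually exists. A quick sanity check (the WordPress installer doesn’t depend on this, it just saves debugging later):
sudo mysql -uroot -p -e "SHOW DATABASES;"
You should see wordpress listed in the output.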
Come up with a name for your dev server
I’ll be using deadpool3.com as my example.
Note: At the time of this writing, I own deadpool3.com, but you can use literally any URL you want. You don’t have to own it. (I think google.com may be an exception. They’ve got some fancy code going on and I wasn’t able to get my /etc/hosts to cooperate in my testing.) More on that in a sec.
Configure static IP address
Next, set your static IP address. You can do this by editing one file. Open it by typing sudo nano /etc/dhcpcd.conf
Inside the file (I made a comment above these lines, so I know what I typed if I open this file again later) add the following lines:
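(The values below are an illustration rather than the exact original config: your interface may be wlan0 instead of eth0, and the router/DNS addresses depend on your network, so adjust accordingly.)
# Static IP for the dev server
interface eth0
static ip_address=192.168.1.111/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1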
So in my case, I made my static IP address 192.168.1.111. Double check your router and network settings for an acceptable range of IP addresses to choose from.
Configure /etc/hosts file on your laptop
Note: You’ll have to edit the /etc/hosts file for every computer on your local network that you’ll be accessing your dev server from. In most home network dev server setups, this will just be a single computer.
Drop the following line at the very bottom of the file ON YOUR LAPTOP or DESKTOP and be sure to replace the IP address with the static IP you just configured in the previous step:
sudo nano /etc/hosts
## Raspberry Pi Server Address ##
192.168.1.111 deadpool3.com
WordPress Configuration
You can either leave your keyboard/mouse/monitor plugged into your pi, and go to localhost in a browser, or grab a different computer on your local network and go to the domain name you set up in /etc/hosts. In my case, it’s deadpool3.com. You should see the WordPress setup screen like this:
Once you’re finished, drop into Settings > Permalinks. Select ‘Post name’ and hit ‘Save Changes’
Configure SSL Encryption
sudo apt install openssl
Create a root key that will be able to generate SSL certs. You can do this by running: mkdir ~/SSLcerts && cd ~/SSLcerts and then openssl genrsa -des3 -out rootCA.key 2048
Create a root certificate by running openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem
Now you’ll need to trust your root CA on every machine you’ll be accessing the server with. To do this, you’ll need to copy the rootCA.pem file to your SSL trusted certs directory on every machine on your local network.
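How you trust the root CA varies by distro; on a Debian/Ubuntu-based machine, something like the following usually handles the system trust store (Firefox, and some other browsers, keep their own certificate store, so you may also need to import rootCA.pem through the browser’s settings):
sudo cp rootCA.pem /usr/local/share/ca-certificates/rootCA.crt
sudo update-ca-certificates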
Next, create an OpenSSL configuration file for us to use with our server: sudo nano ~/SSLcerts/deadpool3.com.csr.cnf. Paste the following into that file and save.
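A typical no-prompt CSR config looks something like this (the [dn] values below are placeholders except CN, which should match your dev domain):
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn

[dn]
C = US
ST = YourState
L = YourCity
O = HomeLab
OU = Dev
emailAddress = you@example.com
CN = deadpool3.com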
Next, we’ll use the configuration options we pasted into deadpool3.com.csr.cnf to create a key file for deadpool3.com. To do this, type: openssl req -new -sha256 -nodes -out deadpool3.com.csr -newkey rsa:2048 -keyout deadpool3.com.key -config <( cat deadpool3.com.csr.cnf )
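Now sign that CSR with your root CA to produce the deadpool3.crt file Apache will serve. A bare-bones version, run from ~/SSLcerts, looks like this (modern browsers may also want a subjectAltName extension supplied via -extfile, which isn’t shown here, and you can adjust -days to taste):
openssl x509 -req -in deadpool3.com.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out deadpool3.crt -days 825 -sha256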
That’s all the files we need to make! Lastly, we need to move the .key and .crt files into a directory where apache2 can access them. In this case, I’m just going to create a new directory in my apache2 root directory like so: sudo mkdir /etc/apache2/ssl
Next, just copy them over. sudo cp ~/SSLcerts/{deadpool3.com.key,deadpool3.crt} /etc/apache2/ssl
And that’s SSL certs generated! Done!
Next, you’ll need to tell Apache where those new SSL keys and certs are. To do this, you’ll need to modify the <VirtualHost> configuration file. By default, you should have one file named /etc/apache2/sites-enabled/000-default.conf. We’ll use this as a template: sudo cp /etc/apache2/sites-enabled/000-default.conf /etc/apache2/sites-enabled/deadpool3.com.conf
We’ll want to change a few things and add some lines. At the very top, inside the <VirtualHost> tag, you’ll want to change the port number to 443. Then add the following lines just below the opening <VirtualHost> tag (above the very first commented-out line):
#Custom SSL setup
SSLEngine On
SSLCertificateFile /etc/apache2/ssl/deadpool3.crt
SSLCertificateKeyFile /etc/apache2/ssl/deadpool3.com.key
Next, remove the comment (#) in front of ServerName and replace www.example.com with your server name (in my case, www.deadpool3.com). The remaining defaults should do fine for our purposes.
So at the end, your <VirtualHost> file should look something like this:
<VirtualHost *:443>
#Custom SSL setup
SSLEngine On
SSLCertificateFile /etc/apache2/ssl/deadpool3.crt
SSLCertificateKeyFile /etc/apache2/ssl/deadpool3.com.key
# The ServerName directive sets the request scheme, hostname, and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
ServerName www.deadpool3.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
</VirtualHost>
Next, we’ll need to take a look at the master apache2.conf file: sudo nano /etc/apache2/apache2.conf. This is a super well-commented file, so it should be largely self-explanatory. We’re going to scroll down until we find the <Directory> tag for /var/www/. Make sure that the AllowOverride parameter is set to All. Your <Directory> tag should look something like this:
<Directory /var/www/>
Options Indexes FollowSymLinks
AllowOverride All
Require all granted
</Directory>
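One last bit of housekeeping that’s easy to forget: Apache’s SSL module has to be enabled before the 443 virtual host will load, and Apache needs a restart to pick everything up. On a stock Debian/Raspberry Pi OS install, that’s roughly:
sudo a2enmod ssl
sudo apache2ctl configtest    # quick syntax check of the config files
sudo systemctl restart apache2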
Hey guys! What’s going on? My name is Matt, and today I’m not doing a museum update and I’m not doing a 3D house model. Those are two other projects that are crazy, and I’ve been doing other stuff, and so…
This video is for one of the projects that fell through the cracks. If I don’t make a video about it, no one will ever know that I did it, so let’s do it!
So a while back, somebody came to me and asked me to do a project for a small kids’ ministry thing at church. They needed a way to get kids checked in and have an account for each kid on this app, right… They needed to play games on the app and, you know, earn points and all this kind of stuff to keep track of everybody’s progress. So for the first phase of creating this app, I thought, well, they have these bracelets that have RFID tags on them, so you could use that to track the kids and keep track of how many points they each have. That might be a good starting point. So I created an RFID check-in system as a prototype for the app that never happened, so here you go:
Alright guys, so the very first thing that I grabbed was a Raspberry Pi and an RFID reader/writer module so I can read the actual tags themselves. Once I got those things together, I started to work on the wiring of the RFID module to the Raspberry Pi itself. That was put together with a breadboard, and if you guys want some more information on exactly how that’s all rigged up, there’s a link in the description. Once I had all the wires run from the RFID module to the Raspberry Pi, it was time to boot up. And once I booted up, I was able to create two Python files and save them in a special RFID directory on the Raspberry Pi.
So the first one is called read, and it does just what you might expect: it reads the identification number on the RFID tag. Every single tag that is created has an ID number, and the read function inside the script will print out that number along with any other information that’s written to that particular tag; if nothing else is written, it will just print the ID. So that is the script for reading the tags. Then there’s a second script, write, and it does the same exact thing, except you’re writing additional information to the tag. In order for the scripts to function, you need to execute them. First off, I’ll just execute the read script. Once you execute it, it’s going to enter a listening mode, so when it detects an RFID tag that has been tapped to the sensor, it will say, okay, I see the tag, and here’s the ID number and any other additional information written to that tag. In order to write to the tag, you just execute the write program, and then it’ll ask you to enter a little bit of text or whatever information you want to associate with that particular tag. For my example, I just put a string of text that says “it is written”, and once I wrote that to the tag, I was able to read that information back and output it to the terminal.
Alright guys, thanks for hanging out. If you want to know more about this RFID reader/writer, I have some more information and technical details and wiring diagrams and all that good stuff in the link below. Hope you guys enjoyed this one. We’re gonna be back next time, probably on home design 3D animation kind of things, but I’d like to get back on the museum train, that’d be great. So we’ll see how it goes, and I’ll see you guys next week. Peace out!
How to fix Failed to Commit Transaction (Conflicting Files) Error in Manjaro Linux
This error is usually thrown after an attempted package upgrade using either pacman, the GUI, or another package manager. Below is an example of the error:
This can happen with pretty much any package, depending on what else you’ve got installed on your system. Basically, pacman is saying it can’t go through with the upgrade because there are some conflicting files on your machine that are preventing the upgrade from progressing any further. Here’s what you can do to solve this.
Step 1
Check to see which package owns the file in question. You can do this by running pacman -Qo /path/to/the/file. If that prints out the name of a package, then you will have to decide whether or not to uninstall the package that owns the conflicting file by using sudo pacman -R nameOfThePackage.
Step 2
If the file in question is not owned by any package (as was the case for my situation), you can simply delete the file in conflict. You can do this by running sudo rm /path/to/the/file. Once the file has been removed, you’ll need to run the update process again to confirm that all the conflicting files in question have been resolved.
When I try to update my machine by running sudo pacman -Syyu I get an error saying it’s unable to lock the database. Below is an example:
But as you may have noticed, by removing a special database lock file, I was able to solve the issue. You can do this with sudo privileges by running:
sudo rm /var/lib/pacman/db.lck
The above method is dangerous
I’ve done this before, and it’s worked perfectly fine with no issues. But the reason the db.lck file exists is to ensure that only one program can run updates at a time. This prevents partial updates, or interrupted updates, or conflicts, or any other problems that can occur when two programs try to do the same update at the same time.
So before you go deleting your db.lck file like I did, do yourself a favor and make absolutely certain that there are no other programs trying to update anything. You can use the lsof command to check what other programs are using the db.lck file. lsof is short for “list open files”.
The lsof command will either return nothing or a short table of output that includes the process ID (PID) of whatever is using the file. If it returns nothing, no process is currently using that file. If it does return something, note the PID (lsof -t prints just the PID); to delete the file safely, you’ll need to kill that process first. You can do that by running sudo kill -9 <process_id>
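Put together, a cautious version of the whole dance looks something like this (the lock file path is the standard one on Manjaro/Arch; only remove it once lsof comes back empty or the process that held it is genuinely gone):
sudo lsof /var/lib/pacman/db.lck    # see if anything still holds the lock
sudo rm /var/lib/pacman/db.lck      # only if nothing does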
Hope that helps! Please leave a comment below if you have any questions. For more information on using the command line, check out this awesome book called The Linux Command Line. It’s free!
Download the YubiKey Manager. This will allow you to modify specific properties of your key, and turn certain features on or off.
Once you’ve installed the manager, you’ll need to make sure that you have U2F mode enabled on your key.
Next, download or create a copy of a special rules file provided by Yubico. It can be found on their Github repository: https://github.com/Yubico/libu2f-host/blob/master/70-u2f.rules. Once you have the file, copy it to /etc/udev/rules.d/. If you already have a file in that directory named 70-u2f.rules, make sure that the content looks like the file from the Github repo.
NOTE: If your version of UDEV is lower than 188, you’ll need the old rules file instead. If you’re unsure of your UDEV version, simply run sudo udevadm --version in a terminal.
Save your file, then reboot your system.
Make sure you’re running Google Chrome version 38 or later. You can use your YubiKey in U2F+HID mode starting in Google Chrome version 39.
Additional Tools:
Yubico provides a proprietary 2FA authentication tool that enables use of the key with services such as Protonmail. It can be downloaded from their site.
Another tip:
If you’re having trouble getting your YubiKey to show up on Linux (I’m running Manjaro), you’ll want to make sure you’re running a service called pcscd. To run it, just open a terminal and run sudo systemctl start pcscd. Keep in mind, that will only start the daemon running. If you reboot your computer and stick your YubiKey in later, it won’t be recognized unless you start the pcscd daemon on boot. You can do this by running sudo systemctl enable pcscd. This will create a symlink to the pcscd.socket file, and it should start the daemon on boot. Once you’ve done that, you’re good to go!
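If you’d rather do it in one shot, enabling and starting the socket unit directly also works, assuming the usual packaging where pcscd is socket-activated:
sudo systemctl enable --now pcscd.socket
systemctl status pcscd.socket    # should report active (listening)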
June 2023 update:
Running a fresh install of Xubuntu on an Acer Chromebook, I was able to use my YubiKey for Google sign-in in Firefox with zero YubiKey-specific package installs, no drivers, and largely out of the box. It would seem that none of the work described above is required anymore.
This is a high-level overview of Blender 2.8. In this video, we take a look at:
Information provided on the Splash Screen
Navigating the 3D view
What the 3D cursor is and how to use it
Creating, editing, and manipulating objects
The Collections system and how you can use it to organize your scene
Restricting object visibility using Collections
Perspective vs orthographic views, what they are and how to switch between them
Hotkeys for changing your view quickly
How to add materials to objects
Detailed exploration of each of the panels in the ‘layout’ view
If you haven’t already, you can download the latest copy of Blender from blender.org. Please note: at the time of this recording, Blender was in the alpha testing stages of version 2.80. However, by that time, most of the visual and back-end changes in the transition from version 2.79 had already been made. Versions 2.81 and later may have slightly different icons or menu placements, but if you’re watching this video and are brand new to Blender, those changes shouldn’t affect you that much.
If you have any problems, or would like to see an updated video, feel free to drop a comment below! All feedback is much appreciated.
In this video, we’ll cover the entire process of compiling custom versions of Blender from scratch. Why bother? Compiling custom builds can unlock special abilities and performance that’s just not possible with a standard installation of Blender.
To download pre-compiled bleeding-edge versions of Blender, check out builder.blender.org.
Blender also provides detailed documentation on how to compile Blender on all major operating systems. You can check out those instructions on the official Blender builder wiki.
To compile on Linux, you’ll first want to clone the repository into a directory of your choice. You can do this anywhere on your machine, but I’ve started compiling all my software in ~/Programs. This folder didn’t exist by default on my machine, so I created it just for compiling new software and packages. To clone the Blender repository, you’ll need an application called git. On Ubuntu, you can install it by running:
$ sudo apt-get install git
Next, you’ll need a special collection of packages in order to build software from source code. To install these packages, just run:
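(The exact package list varies by distro and Blender version; on Ubuntu, the core build toolchain below is a reasonable starting point, and the Blender builder wiki linked above lists the full set of development libraries.)
sudo apt-get install build-essential cmake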
The computer will prompt you for your password. Just type in your password, hit enter, and your package manager will download and install all the required packages to be able to compile Blender.
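From there, the build itself is mostly hands-off. Here’s a rough sketch of the whole sequence; the clone URL and make targets follow Blender’s build docs, but double-check the wiki since they’ve changed over the years:
mkdir -p ~/Programs && cd ~/Programs
git clone https://projects.blender.org/blender/blender.git
cd blender
make update    # pull the precompiled libraries and submodules
make           # compile; the result lands in ../build_linux/bin/blender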
If you have any questions, please drop a comment below! To be notified of new posts in the future, sign up for the email list at the top of the page. Keep creating!
Hey guys! If you’re here, you probably already know what RAM is and you’re excited to learn how to add swap in Manjaro. If not, this is sort of a follow-up post to how to download more RAM. Anyway, let’s get started!
Using a Swap File
There’s a ton of different ways to add swap to your system, some more advantageous than others. In my experience, it’s always been easier to add swap to an existing install by using a swap file. First, just confirm that you don’t already have swap enabled. To do this, just run sudo swapon (or swapon --show). If that command does not return any output, then you don’t have swap enabled. Also, if you have and/or use htop, it will actually display your swap status right below your RAM usage bar. If the swap bar is empty and reads 0K/0K, then you don’t have swap enabled. Great! Now we can add a swap file.
Creating and initializing a new swap file
To create and initialize a new swap file, we’ll be using the fallocate and mkswap commands. To initialize a 16GB swap file, just run the following in a terminal:
sudo fallocate -l 16G /swapfile
then run: sudo mkswap /swapfile
Setting permissions for your new swap file
Manjaro will likely give you a warning about changing the permissions of your swap file. You can change permissions using the chmod command. The swap file should only be readable and writable by root.
sudo chmod 0600 /swapfile
Enabling your new swap file
Enable your new swap file by running the following:
sudo swapon /swapfile
Make your changes permanent
Make sure Manjaro knows to use your swap file every single time it boots up. Do this by running:
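The usual way to do that is to add an entry for the swap file to /etc/fstab. A one-liner that appends it (double-check /etc/fstab afterwards, since a malformed line there can make booting unpleasant):
echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab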
And that’s it! The only thing left to do is reboot and double check that your swap is up and running. Again, you can just run those commands from the beginning of the tutorial (swapon or htop) and you should be good to go! Happy blending!
In this video, we take a look at how to install Blender on Mac OSX. It is a fairly straightforward process. Blender installs just like any other piece of software for Mac OSX. Simply download the .dmg from the official releases page. Once the package finishes downloading, you can double click it to begin the install. There’s a solid chance that OSX will block the install by default. OSX complains about non-native software. Go figure. To allow the install to continue, you’ll need to open your system preferences. There is a warning under the security and privacy tab saying that the Blender installer tried to run. Simply click ‘allow anyway’ and try launching the installer again.
Next, you’ll get one last warning; just click ‘run anyway’. The installer will appear. Run through all the options, accept the agreement, and you’ll be all set.
Depending on the installer, on Mac OSX, you may not get an installer wizard. Sometimes the installer mounts itself like a disc, and pops open a window, prompting you to drag the application icon into the applications folder. If that’s the case, then just do that. In a lot of ways, that’s just easier. After you drag and drop, you should have Blender successfully installed on your machine! Hurray!
Please drop in your email up top to find out more on the basics of how to use Blender and tips and tricks to get you started. Keep creating!