Geek Fu
Monday, March 28, 2016
There are a lot of reasons I feel this way, but this post is just my latest example.
Nice job to Benny Cornelissen:
http://blog.bennycornelissen.nl/bash-4-x-on-osx-the-bash-that-apple-wont-ship/
Also, if you want to run an updated bash shell on your Mac, then Benny's your uncle, and you'll get introduced to Homebrew (the missing package manager for OS X), which you should be using anyway. Seriously.
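The short version of the process looks roughly like this (see Benny's post for the full details; the /usr/local/bin path assumes a standard Homebrew install on an Intel Mac, so double-check the output of brew --prefix on your machine):
brew install bash
# add the new shell to the list of allowed login shells
sudo bash -c 'echo /usr/local/bin/bash >> /etc/shells'
# make it your default login shell
chsh -s /usr/local/bin/bash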
Thursday, February 4, 2016
Oracle primary key creation in parallel
After an earlier post, Saving a LOT of time with Oracle foreign key creation in PARALLEL, I was asked about primary key creation in parallel. It suffers from the same issue, and can be really time-consuming if you need to drop and recreate a primary key, either after a new load, or if you are partitioning an existing primary key.
There are 2 similar but slightly different approaches that I have tried and used. The first, more straightforward approach is to create a unique index on the column you wish, in parallel, and then reference that index in the primary key creation clause, with USING INDEX. The second involves creating the primary key in a disabled state, creating an index in parallel using whatever definition you want, and using the same name as the constraint, then enabling the constraint. In both cases, you need to turn off the PARALLEL option with an alter index statement, or you will get behavior you might not want.
A simple example of the less obvious second option follows:
-- drop the existing primary key
alter table my_foo drop constraint pk_my_foo;
-- add the constraint back, but disabled, so no index gets built yet
alter table my_foo add constraint pk_my_foo primary key(id) disable;
-- build the index we really want, in parallel, with the same name as the constraint
create unique index pk_my_foo on my_foo(id) global partition by hash(id) partitions 8 parallel;
-- don't leave the parallel option on the index
alter index pk_my_foo noparallel;
-- enabling the constraint picks up the existing matching index
alter table my_foo enable primary key;
Why would you want to do this instead of the more obvious approach of creating a stand alone index? Well, for one thing, you can keep the names the same, which is pretty cool. This can help keep standards in line, if you normally have a 1-1 match between the constraint name and the index name.
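For comparison, here is a minimal sketch of the first, more straightforward approach on the same hypothetical my_foo table (the index name my_foo_id_ux is just an example):
-- build the unique index in parallel first
create unique index my_foo_id_ux on my_foo(id) parallel;
-- point the primary key at the existing index
alter table my_foo add constraint pk_my_foo primary key(id) using index my_foo_id_ux;
-- as before, turn the parallel option back off
alter index my_foo_id_ux noparallel;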
Wednesday, July 29, 2015
Stop calendars added on Google calendar from appearing or sending notifications on your phone
A quick shout out to Ben Rimes (@techsavvyed) for posting a quick walk-through on stopping the endless spam of notifications on an iPhone or Mac when you have multiple Google calendars added to your own calendar. It is a massive pain in the butt, and should be an exposed and obvious setting someplace, not a special web page you have to find the URL for.
See his blog post at techsavvyed.net for a video walk through, and the actual URL: google.com/calendar/syncselect.
Hope this helps others!
Tuesday, May 26, 2015
Increasing Swap Space by Stealing from /home
Recently, I was installing Oracle 12c on Red Hat Enterprise Linux 6. This should have been easy, right? In this case, I did not install the OS - my IT guy did, and he didn't know that Oracle has a minimum swap size requirement. And of course, I didn't remember to tell him. So... as I was checking prerequisites, I found that I was below the minimum on swap space. I needed 16G, and only had 4G.
What to do? Well, the simplest approach is to steal home. Or actually, steal some space from /home.
In my case, I had about 4G of swap space, and I had about 224G of space in the /home partition. With a default install of RHEL6, the filesystems are in volume groups, so instead of using fdisk, we get to use lvm commands, which is really pretty cool.
The short and simple approach follows. Note: this is not a well-written guide with examples, output, and highlights. It is a quick and dirty cookbook for someone who knows what they are doing. I might expand on this later... or not. Log off all existing users, and log in as root. In a bash shell, run the following, changing the paths and sizes to match the specifics of your install.
Preliminary Information
vgs
Get the name and size of the volume group.
lvs
This will show the size of the various logical volumes in the volume group.
mount
Show the paths of the mounted filesystems. Take the base of the home path, and find out the other paths.
fdisk -l /dev/mapper/vg_node*
This lists the other similarly named partitions, which gives us the path of the swap partition.
Reducing the Size of /home and the Logical Volume
umount /home
Yep. It unmounts /home so you can mess with it.
e2fsck -f /dev/mapper/vg_node2-lv_home
This checks the volume, and is required before a resize.
resize2fs -p /dev/mapper/vg_node2-lv_home 210G
This resizes the ext4 volume to 210G, which added enough space for the swap space I needed.
lvreduce -L 210G /dev/mapper/vg_node2-lv_home
I use the exact same size specifier here, so I don't destroy the filesystem.
e2fsck -f /dev/mapper/vg_node2-lv_home
Not strictly required, but it really makes me feel better to do this.
mount /home
We should have a working home volume now.
lvs
Shows us the new size of the volumes. In this example, /home should be 210G.
Increasing the Size of swap
cat /proc/swaps
Check out the size of our existing swap.
swapoff /dev/mapper/vg_node2-lv_swap
Turn it off so nothing is using it while we mess with it.
vgdisplay
This will show you the number of free extents, which we pass to the -l param below.
lvextend -l+3682 /dev/mapper/vg_node2-lv_swap
We extend the volume by as much as we had free (in this case 3682 extents).
mkswap /dev/mapper/vg_node2-lv_swap
We recreate the swap space using all available space.
swapon /dev/mapper/vg_node2-lv_swap
Turn on the swap space!
cat /proc/swaps
Verify that the new swap space is being used.
lvs
Display the size of the logical volumes.
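One last sanity check I like to do, assuming /etc/fstab references the swap device by the same /dev/mapper path (it should, since we reused the existing logical volume):
# confirm the fstab entry still matches the device we just rebuilt
grep -i swap /etc/fstab
# and confirm the kernel agrees
swapon -s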
Saturday, February 7, 2015
Saving a LOT of time with Oracle foreign key creation in PARALLEL
One of the things that can consume a ton of time when doing DB maintenance is recreation of foreign keys. You can't drop and recreate an index when a foreign key points to it. So, if you are dropping a unique index or primary key, and you get an error telling you that you can't do that, here are the steps to take:
First, find the foreign keys that point to the index. Let's say you're dropping the primary key on a table, perhaps to hash partition it. If the table (parent_table) has a column named id, you can query the USER_CONS_COLUMNS view with the table_name and column_name of the table you are maintaining. It will show you the name of the constraint (and owner). If this is a system generated constraint name, and that isn't enough to identify the source table, you can join that constraint name against user_constraints and get more info.
Example:
select * from user_cons_columns where table_name = 'PARENT_TABLE' and column_name = 'ID';
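If you also want a list of the child tables whose foreign keys will need to be dropped, a query along these lines against user_constraints works (the constraint name is just the one from this example):
-- find the foreign keys that reference the parent table's primary key
select table_name, constraint_name, status
  from user_constraints
 where constraint_type = 'R'
   and r_constraint_name = 'PK_PARENT_TABLE';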
So, drop the foreign key causing the problem:
ALTER TABLE child_table DROP CONSTRAINT fk1_child_table;
Do your work on the parent table:
-- get rid of the old primary key
ALTER TABLE parent_table DROP CONSTRAINT pk_parent_table;
-- replace with a performant partitioned primary key
ALTER TABLE parent_table ADD CONSTRAINT pk_parent_table
PRIMARY KEY(id) USING INDEX GLOBAL PARTITION BY HASH(id) PARTITIONS 64
TABLESPACE tblspc1;
Time to add the foreign key back... But wait! If these are huge tables (a strong possibility if you are partitioning them or their indexes!) it can take a LONG time to recreate a foreign key. Most of that time is spent doing validation, and it happens serially. There is a really easy way to create that foreign key in a few steps and save yourself a tremendous amount of time!
I ran across this idea in a post here, so I can't claim credit for it, but I do use it in different scenarios which might lead others to find it. It was a great post, and I plan to use it a lot more.
It's very simple - you defer validation, which you CAN do in parallel. Simply recreate the foreign key, like so:
ALTER TABLE child_table ADD CONSTRAINT fk1_child_table FOREIGN KEY (parent_id) REFERENCES parent_table(id) ON DELETE CASCADE ENABLE NOVALIDATE;
Of course, now you have a foreign key that needs validation. Very simple, but we want to do it in parallel, so our multi-hundred gigabyte table can be done quicker. First, turn on PARALLEL DDL for the session:
-- we can't go faster if we don't turn on parallel!
ALTER SESSION ENABLE PARALLEL DDL;
-- temporarily make the child table parallel enabled:
ALTER TABLE child_table PARALLEL;
-- prepare to wait a while on this, but you can entertain yourself
-- by watching your validation go massively parallel in Enterprise Manager :-)
ALTER TABLE child_table MODIFY CONSTRAINT fk1_child_table VALIDATE;
-- once the validate FINALLY finishes, be sure to turn off the parallel option, unless
-- you know what you are doing and really, really wanted it on.
ALTER TABLE child_table NOPARALLEL;
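To confirm the constraint really did end up validated (again, the constraint name is just the one from this example):
select constraint_name, status, validated
  from user_constraints
 where constraint_name = 'FK1_CHILD_TABLE';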
I should have tried this on the other operations as well, but that's for next time!
Thursday, September 26, 2013
Props to David Latham: Apache and SELinux
So, I was working on a CentOS 6.4 box to set up an SVN server. Early in the process, I decided I would try (for the first time) to successfully live with SELinux and its requirements, hoops, and other arcana. Generally I just turn the POS off to allow me to admin my Linux boxes (which are usually behind multiple levels of firewalls, etc. etc. etc.). This time, I wanted to try to live with it for a while.
Things were going great until I realized I needed to use some space assigned to /home to host some data. So, I set up a directory there (using a Location block in an Apache .conf file). I restarted Apache with
service httpd restart
and launched my browser... and got a 500 error.
Well. Well. So, I went through the normal hoops - checking permissions and usernames and lions and tigers and bears, oh my. No joy. However, I saw a reference somewhere that SELinux might be causing the problem (which I kinda expected), and that I could test that proposition with
setenforce 0
Voilà! It worked!
Of course, the whole reason for this charade was to try to live with SELinux (even though we use trivial passwords behind the firewall) as a learning experience. Hmmmmph. I was starting to regret this already. So, I used
setenforce 1
to turn the beast back on, and set off on my quest.
A few Googles away, I found the link below, which let me resolve the problem. The magic incantations I recited included the following two lines:
setsebool httpd_enable_homedirs true
chcon -R -t httpd_sys_content_t .
I was able to view the current extended attributes for SE by using:
ls -alZ
I'll include the entire blog post as a help to others, but there was no way I was gonna chmod 777 on the directories to make this work! Now, if I could just figure out WHERE those magic incantations come from, without having to learn 1,000 pages of arcana that don't really matter.
David Latham: Allow httpd ( apache ) to write to files and folde...: You may have read my previous post about configuring apache for public_html with selinux. Now today we look at extending this a little wit...
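For what it's worth, a few related commands that should make those incantations stick (the /home/svn path is just a stand-in for wherever your content actually lives, and semanage may require the policycoreutils-python package on CentOS 6):
# list the httpd-related booleans to see what's available
getsebool -a | grep httpd
# -P makes the boolean change survive a reboot
setsebool -P httpd_enable_homedirs true
# record the file context rule so a relabel doesn't undo the chcon
semanage fcontext -a -t httpd_sys_content_t "/home/svn(/.*)?"
restorecon -Rv /home/svn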
Thursday, September 5, 2013
Cool page talking about Linux RAM usage
So, first, I've been using Linux for a really, really, really long time. Like... 1994. I had to hack a SCSI driver to get my Adaptec 2940W working. Yeah, I know - bragging, right?
I've been using top for even longer - it was a good friend on AIX, HP-UX, etc. And I've certainly peeked at /proc/meminfo a few times. But something I never really knew existed until recently was a cool command line tool called "free". And more to the point, I never knew something really, really basic about memory usage on Linux.
I stumbled across this page, and had to smile at the title:
Linux Ate My RAM!
It definitely opened my eyes about something I hadn't spent much time thinking about, so - thanks!
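To see the point for yourself, compare the raw numbers with the cached figures (the exact rows and columns vary between versions of free and the kernel):
free -m
# on older versions of free, the "-/+ buffers/cache" row is the one that matters;
# newer versions show an "available" column instead
egrep 'MemTotal|MemFree|Buffers|^Cached' /proc/meminfo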
free -m... it's your friend.