UTI ITSL – Data Disclosure through a single key

NSDL and UTI ITSL are the two bodies authorised by the Indian Government as the official PAN card service providers. Recently I had the privilege of using UTI ITSL's services for a PAN update.

After waiting some time for my card to be processed, I went to the UTI-ITSL website to check the status. I entered the application number and instantly got the status of my query. Cool!

As a fuzzer, in the form field for ‘Application Coupon Number’ I entered the next number (my appln number + 1). And yes, it gave the results. I entered some more numbers in the sequence and got results for each query. I could get results for applications as far back as 2011. This means that if someone runs a tiny script to scrape the applicants' data for the last 8 years, they can easily get the details – full name, PAN number, application number.

Details

Name, PAN No, Courier Tracking Details

As shown in the above image, all these details are visible to everyone without any kind of authentication; you just need to input a 9-digit application number.

And there is something more to that – you can look for the PIN Code and City of the applicant, through the Courier Tracking Number:

Post Office Track

This PAN card was delivered to some guy in RANPUR (Gujarat) on 09-03-2017; most probably he lives there

If you are luckier, you will also get the birth date and the spouse's/father's name of the applicant:

Mismatch 1

The above applicant has a name mismatch between the Income Tax Department's data and the data provided in the application. So which field needs to be shown to the applicant – only the field that has the conflict, right? No: even the DOB, which is totally irrelevant in this case of a name mismatch, is shown. Proof below:

Mismatch 2

In case of a name mismatch (field highlighted in pink by the UTI guys), the father's name and DOB are also displayed

With some modification to the script to scrape all this data, we can fetch the DOBs of all the people who have such a mismatch in their application. Later, through correlation, we can get the below details for a single applicant:

  1. Applicant’s full name
  2. Applicant’s Father’s full name
  3. Applicant’s DOB
  4. Applicant’s PAN Number
  5. Applicant’s PIN Code and City

This counts as a huge flaw in the design of their application: it gives away such golden data with very little effort and exposes the PII of millions of applicants.

Some suggestions for UTI developer guys:

  • Randomize the application numbers, if possible.
  • Please do not allow anyone to query your database with a single key. At least use two keys (e.g. 1. Application Number & date-time of application, 2. Application Number & UID Number).
  • Don't provide the status once a month has passed after the PAN card has been received by the applicant.

 

(I tried to contact the people at UTI ITSL: their email (utiitsl.gsd@utiitsl.com) bounces back, no-one picks up the phone, and for snail-mail I don’t have the postal stamps)

Eti.

 

 


Remote logging with Rsyslog

Rsyslog is "the rocket-fast system for log processing". Succeeding syslog, rsyslog now comes pre-installed on most Linux systems and handles both local and remote logging.
On any system, you will want to (a) keep the system and application logs on the local machine, and/or (b) send the system and application logs to a remote machine.

Below are two cases, useful for forwarding OS logs and application logs:

  • Forwarding only OS logs:

Add the line below at the bottom of the /etc/rsyslog.conf file, then restart the rsyslog service:

*.info;authpriv.*;cron.*;mail.*     @remote_ip:514

By default, rsyslog uses port 514. If the logs need to be forwarded over UDP, put a single '@' before the remote_ip; for TCP, put '@@' before the remote_ip.

*.info – all logs with severity info or higher

authpriv.* – all logs related to authorization and privileges

cron.* – all logs related to cron – scheduled jobs

mail.* – all logs related to mail and mail servers

  • Forwarding OS and Application logs:
# Add the following module - it is the module for forwarding logs from a file.
# Add this along with the other $ModLoad tags at the top of the file
$ModLoad imfile
# Add 'local7.none' to the below line as shown below.
# This will stop the logging of local7 messages in /var/log/messages, as we need to forward our application logs through local7 service
*.info;mail.none;local7.none;authpriv.none;cron.none                /var/log/messages

# Comment the local7 for boot logs, to stop logging the application logs to /var/log/boot.log which we are forwarding through local7 service
#local7.*                                               /var/log/boot.log

# Add the below lines to forward the logs from their respective files. The first three lines vary per log file, the other two are static.
# $InputFileName takes the path to the log file (absolute path of the file)
# $InputFileTag will attach the mentioned tag (here: tag_website.com) to the original log
# $InputFileStateFile is the state file in which rsyslog remembers how much of the log file has already been read (so entries are not lost or re-sent, e.g. after a restart)
$InputFileName /path/to/log/file
$InputFileTag tag_website.com:
$InputFileStateFile buffer_file_name
$InputFileFacility local7
$InputRunFileMonitor

# Add this line at the bottom of the file, for forwarding
# local7.* (all logs of local7 - application),
# *.info (all logs with info level),
# authpriv.* (all logs of authorization-privilege) and
# cron.* (all logs of cron)
# - to the receiver IP and Syslog port 514.
# Add '@' for sending logs through UDP, '@@' for TCP.
local7.*;*.info;authpriv.*;cron.* @receiver_IP:514

(The configuration given above is for Red Hat based systems, using the legacy rsyslog syntax. It may differ on Debian based systems.)
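On the receiving machine (not covered above), rsyslog has to be told to listen on port 514. A minimal sketch, assuming the same legacy syntax in the receiver's /etc/rsyslog.conf:

# To receive logs over UDP
$ModLoad imudp
$UDPServerRun 514

# Or, to receive logs over TCP
$ModLoad imtcp
$InputTCPServerRun 514

Restart the rsyslog service on the receiver after adding these lines.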

Common Troubleshooting Steps:

  • Check network connectivity between the sender and receiver – Firewall port opening (Port: 514 – TCP/UDP), Ping, Traceroute
  • Check if logs are present at the mentioned log file path
  • Check the spaces and semicolons in the rsyslog configuration file
  • Change the $InputFileStateFile value to something else (e.g. change buffer_file_name to buffer_file_name_1)
  • Restart the rsyslog service
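
To generate a quick test message from the sender (assuming the local7 setup above), the standard logger utility can be used; the entry should then show up on the receiver:

# logger -p local7.info "rsyslog forwarding test"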

SSL/TLS and Your Browser

SSL in Browser

 

 

SSL/TLS adds an extra layer of security to HTTP, making it HTTP Secure (HTTPS). It works at the application layer (OSI model) alongside HTTP. HTTPS is not a different protocol, but the underlying HTTP with SSL/TLS implemented for security.

Public Key Infrastructure (PKI) and Certificate Authorities (CAs) make this possible.

How does HTTPS work?

Short Version
Just like the TCP Handshake, a handshake happens in SSL between the server and the client. We can break this handshake into three steps: Hello, Certificate exchange and Key exchange.

Hello

The client sends a Hello message and the server responds with its Hello message. These messages contain information like the SSL version supported, cipher suite and some random data for key generation.

Certificate Exchange

To prove its authenticity, the server has to send its SSL certificate to the client. On receiving the certificate, the client checks whether it is verified and trusted by some Certificate Authority, and decides accordingly. For some sensitive applications, the server can ask for a certificate from the client too.

Key Exchange

A symmetric key is established between the two parties. The client computes a key, encrypts it with the server’s public key, and sends it to the server. Only the server can decrypt it, using its own private key. All further communication then takes place encrypted with this symmetric key.
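
The hello, certificate exchange and key exchange described above can be observed by hand with OpenSSL's s_client tool (assuming the openssl command-line tool is installed; example.com is just a placeholder host):

openssl s_client -connect example.com:443 -servername example.com

The output shows the certificate chain presented by the server and the cipher suite that was finally negotiated.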


Long Version
Client Hello

After the TCP connection is established, the client starts the SSL handshake. The important data in the client’s Hello message includes:

  • Version Number (e.g. SSL 2.0, SSL 3.0, TLS 1.0 – which appears on the wire as version 3,1)
  • Random Data (which is later used with the Server’s Random Data to generate a secret key)
  • Cipher Suites (the list of cipher suites available on the client, each of which specifies the protocol version, the key-exchange algorithm, the encryption algorithm, and a hash function)

The Client Hello message can be:

ClientVersion 3,1
ClientRandom[32]
SessionID: None (new session)
Suggested Cipher Suites:
   TLS_RSA_WITH_3DES_EDE_CBC_SHA
   TLS_RSA_WITH_DES_CBC_SHA
Suggested Compression Algorithm: NONE

Server Hello

The Server responds with its Hello message, and some of its fields are:

  • Version Number (The highest version which both of them – server & client support)
  • Random Data (which is later used with Client’s Random Data to generate a secret key)
  • Cipher Suite (the strongest cipher suite which both server & client support is chosen by the server. If there is none, the session will be ended with ‘handshake failure’)

The Server Hello message can be:

Version 3,1
ServerRandom[32]
SessionID: bd608869f0c629767ea7e3ebf7a63bdcffb0ef58b1b941e6b0c044acb6820a77
Use Cipher Suite:
TLS_RSA_WITH_3DES_EDE_CBC_SHA
Compression Algorithm: NONE

Along with the above-mentioned details, the following messages accompany the Server Hello:

  • Certificate – the server sends its digital certificate to the client, which contains the server’s public key
  • Server Key Exchange (optional) – the server sends a temporary key to the client, if the chosen cipher suite requires it
  • Certificate Request (optional) – the server asks the client for its certificate, to validate the client’s authenticity
  • Server Hello Done – the server’s hello phase is finished, and the client can respond

Client Response

After getting the server’s Hello Done message, client starts talking. It sends the necessary messages in the below mentioned sequence:

  • Client Certificate (only if requested) – contains the client’s public key, to authenticate the client to the server
  • Client Key Exchange – the most important part of the communication. The client generates a premaster secret and encrypts it with the server’s public key before sending it, so that only the server can decrypt it with its private key. Both sides then derive the session keys from this premaster secret and the two random values exchanged earlier.
  • Change Cipher Spec – all further messages will be encrypted using the negotiated keys and algorithms
  • Client Finished – a hash of the entire handshake so far; this is the first message encrypted with the negotiated keys for the session.

Server Final Response

This is the final message in the conversation between the server and the client to have a secured connection. The server’s final response will have:

  • Change cipher spec – will notify the client that the server will start encrypting the messages with the negotiated keys and algorithms
  • Server Finished – is the hash of the entire conversation to this point. If the client can decrypt this message and validate the hashes, it means that the SSL/TLS handshake was successful.

After the SSL/TLS handshake is done, further communication is secure between the server and the client.


Example

A representation of how your browser starts an HTTPS connection with the website example.com:

  • Firefox (your browser, for example) connects to the example.com server over HTTP and asks for the login page, which uses HTTPS
  • For the communication, the server sends Firefox a certificate, which contains the server’s public key
  • Firefox verifies the public key of the server from the certificate
  • Firefox chooses a random symmetric key and encrypts it with the public key of the server
  • On receiving the encrypted message, the server decrypts it with its private key. Nobody else on the network who has received the encrypted message can decrypt it, because they don’t have the server’s private key. Now the server has the symmetric key with it
  • Every time Firefox wants to send something to example.com in a secured manner, it will encrypt it with the symmetric key. On the other end, the server will decrypt it with the same key

Every website/server that wants to implement HTTPS (i.e. SSL/TLS security) has to buy SSL certificates from authorities like VeriSign, Comodo, etc. Many websites implement HTTPS only for some important pages (like login or payment) while the other parts of the website work on plain HTTP. Implementing HTTPS for the whole website is not very costly, but the CPU overhead of processing requests increases. Hence many website owners stay away from HTTPS because of the cost factor or the overhead factor. Recently Google announced that it will reward HTTPS webpages with a higher ranking in its search results (source).

 

Why not use asymmetric key encryption for the handshake?

There’s an answer on StackExchange: (1) asymmetric encryption is much slower than symmetric encryption, and (2) for the same key length, asymmetric encryption is weaker than symmetric encryption.

What can an attacker see if you are using SSL/TLS during your connection?

If you are using SSL/TLS correctly, the attacker can see only a little of your data: the domain you are connected to, and the related IP addresses and port numbers.

For example, if you are doing a Google search over HTTPS, the URL in the browser will be https://www.google.co.in/?gws_rd=ssl#q=what+is+https, and you can see the full URL there. But on the wire, only the domain name google.co.in goes out in the clear (to the DNS for name resolution), not the full query/URL. Hence you can say that HTTPS hides the full URL; only the domain name is revealed.

HTTPS provides confidentiality of data, but not anonymity of who is sending / receiving the data.

This interactive image by the EFF gives a clear picture of what eavesdroppers can see while you are using HTTPS and while you are using Tor.


(References: SSL/TLS in Detail, An answer at StackExchange)

Snort on DSL connection

I was comfortable running Snort on my eth0 connection on my previous Ubuntu installation. Later I switched to Fedora, and eth0 was replaced by eno1. And there was another change – I started using a direct DSL line, which uses a PPP connection.

Now while doing ifconfig for the DSL connection, I get the interface as ppp0 instead of eno1.

ifconfig - ppp0

 

The limitation with Snort is that it only decodes Ethernet packets, so it ignores the ppp0 interface. Even though I am using the ppp0/DSL connection through my Ethernet port, the connection shows up as ppp0, not eno1.

If you try starting the Snort instance with the command

# snort -c /etc/snort/snort.conf -l /var/log/snort/ 

it will give the following error:

ERROR: Cannot decode data link type 113
Fatal Error, Quitting..

Snort initial error

If you search for this error, you will find a variety of solutions. If your Snort version is 2.9.6.1, none of them are going to work for you. The reason: support for --enable-non-ether-decoders has been dropped.

If you add that argument to the command for igniting Snort, you will be shown a list of available arguments, but --enable-non-ether-decoders will not be among them. I was furiously looking for a solution to this problem. After going through some forums, it came to my mind to try a workaround.

The easiest option available was to make Snort listen on eno1, the physical interface the ppp0/DSL connection is plugged into. Give the command with an additional argument, -i eno1:

# snort -D -i eno1 -c /etc/snort/snort.conf -l /var/log/snort/

This will start the Snort daemon on the eno1 interface, capturing all the packets and dumping them to your desired location. The logs will be located in files named snort.log.xxxx. For every instance there will be a new log file, with the packets logged in binary PCAP format, readable by Wireshark, Snort, or other similar applications.

Snort Logs

If you try to read these logs with some text reader/editor, it will be like reading Webdings fonts. Don't do that. Snort has a better reader, invoked as snort -r.

Give the command:

# snort -r snort.log.1405955899

This will give you a nice analysis of the packets with all the logs available to you. You can also export the readable content to a .txt file by the normal methods.

Snort -r Output

Choose the rules you apply in Snort very wisely. As this was a test environment, I enabled all the available rules for the scenario, and that gave me 5 MB of logs when I ran Snort for just 25 seconds. You need to cut that down, Roger!

Parsing and getting the required information from these logs is one more task. Have you tried Splunk, lately? Here: http://apps.splunk.com/app/340/
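
If Splunk feels like overkill, a rough sketch with Python and the scapy package (an assumption – any pcap-reading library will do; the file name below should match your own snort.log) can pull quick statistics out of these binary logs:

from collections import Counter
from scapy.all import rdpcap, IP

packets = rdpcap("snort.log.1405955899")   # Snort's binary pcap log
packets.summary()                          # prints a one-line summary per packet

# Count packets per source IP to spot the noisiest hosts
top_talkers = Counter(pkt[IP].src for pkt in packets if IP in pkt)
print(top_talkers.most_common(5))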

 

TL;DR: specify your interface as eno1 (-i eno1) even if you are using a ppp0 connection

Evading mod_evasive on Apache

These days, most web servers run either Apache or Nginx (ref: Netcraft). For Apache there are several security tips and a few modules for providing security; one of them is mod_evasive. Most basic server-hardening guides recommend installing mod_evasive to secure Apache against Denial of Service attacks. mod_evasive comes with default settings that you normally don't need to play with if you have a general-purpose website.

How mod_evasive works:
(ref: /etc/httpd/conf.d/mod_evasive.conf)

DOSPageCount, default: 2 – in 1 second

This is the threshold for the number of requests for the same page (or URI) per page interval. Once the threshold for that interval has been exceeded, the IP address of the client will be added to the blocking list.

DOSSiteCount, default: 50 – in 1 second
This is the threshold for the total number of requests for any object by the same client on the same listener per site interval. Once the threshold for that interval has been exceeded, the IP address of the client will be added to the blocking list.

DOSBlockingPeriod, default: 10
The blocking period is the amount of time (in seconds) that a client will be blocked for if they are added to the blocking list. During this time, all subsequent requests from the client will result in a 403 (Forbidden) and the timer being reset (e.g. another 10 seconds). Since the timer is reset for every subsequent request, it is not necessary to have a long blocking period; in the event of a DoS attack, this timer will keep getting reset.

Explanation: If an IP address requests a page more than 2 times in 1 second, or requests objects more than 50 times on the same listener in 1 second, the IP address will be blocked. It will be blocked for 10 seconds, and all requests during that time will result in a 403.
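
For reference, these defaults correspond to a configuration block roughly like the one below (the IfModule name may be mod_evasive20.c or mod_evasive24.c depending on the Apache/package version):

<IfModule mod_evasive24.c>
    DOSPageCount        2
    DOSPageInterval     1
    DOSSiteCount        50
    DOSSiteInterval     1
    DOSBlockingPeriod   10
</IfModule>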

What I did:

  • Copied a website and all its objects using ‘wget’ and hosted the website from its source on my Apache server in the folder /var/www/html/
  • Created the below Py script to get a HTTP Connection to the server and GET the requested object.
  • lst is the list of site objects which were to be accessed using GET.
  • It requests a randomly chosen object from the list on each iteration, so the same page is not hammered continuously.
  • The same script was used for all three tests.

#####Start of Script#####

import httplib  # Python 2 standard library
from random import choice

# Site objects to request over and over
lst = ['/april.html', '/august.html', '/company-profile.html', '/contact.html',
       '/december.html', '/february.html', '/index.html', '/inquiry-form.html',
       '/january.html', '/july.html', '/june.html', '/march.html', '/may.html',
       '/november.html', '/october.html', '/september.html', '/services.html',
       '/tide-table.html', '/images/ani.gif', '/images/back.jpg', '/images/icon.gif',
       '/images/banner.gif', '/images/slogan.gif']

n = 1
while True:
    i = choice(lst)                                    # pick a random object
    httpServ = httplib.HTTPConnection("127.0.0.1", 80)
    httpServ.connect()
    httpServ.request('GET', i)
    response = httpServ.getresponse()
    if response.status == httplib.OK:
        print str(n) + " Received " + i
    httpServ.close()
    n += 1

#####End of Script#####

Checking mod_evasive with default settings, requesting from the same machine (localhost):

Server: Apache 2.4.6
OS: Fedora
Client: Fedora, Python script

>>

Localhost 403

Running this script on the Fedora (localhost) machine causes the machine's temperature to rise to 87 degrees Celsius (the process was Ctrl+Z'ed to avoid overheating, as the point was proved). mod_evasive does stop serving the script as soon as it finds the threshold exceeded, but it keeps returning 403 to it. The 200 responses stop and the 403s start; Apache continues processing and serving 403 to the script. So what is the use of mod_evasive? mod_evasive is built to protect against DoS attacks, but here mod_evasive itself is the victim: it keeps processing requests, the server stays busy, and a single script provides enough load.

Checking mod_evasive with default settings, requesting from a Windows machine:

Server: Apache 2.4.6
OS: Fedora
Client: Windows 7, Python script

>>

Py script running on Windows
The same thing that happened from localhost occurs when sending requests from a Windows machine. After some time Windows reports that it either lacked sufficient buffer space or the queue was full.

Forbidden (Windows)

Error in Windows

Checking mod_evasive with default settings, requesting from a Linux machine:

Server: Apache 2.4.6
OS: Fedora
Client: Kali-Linux_x86, Python script

>>
The story continues here. Testing from Kali Linux with the same Python script will DoS the Apache server. The main task was to flood an Apache server running the default-configured mod_evasive module, and it was accomplished. Mr mod_evasive, what is the point of sending a 403 to the blacklisted IP every single time? It achieves the exact opposite, clogging the server and leaving very little time for other clients' requests.

Kali 403

One more trick is to request a non-existent object (e.g. /hello-admin.html), so the server stays busy responding with 404 Not Found. We just need to keep the server busy with our requests, and this tiny, simple script does it all.

400

The screenshot below shows how much processing apache/httpd does while handling this single script.

top results, high usage by apache

Here it can be seen how the temperature rises by 20 degrees in just 1 minute:

Temperature rise - I

Temperature rise - II

In plain text: using mod_evasive with default settings is of NO use, as it does not stop serving the DoSing client but just responds with a 403. The processing load remains roughly the same.

Common problems during initial Honeyd configuration

Honeyd is a small daemon for Linux (now also available for Windows) that simulates multiple virtual hosts on a single machine. It is a kind of interactive honeypot. The latest release can be downloaded from the Honeyd release page.

For my project, I have been working with honeypots, and Honeyd is one of them. In the initial stages, I faced some problems while setting up some basic personalities with Honeyd. Here I recall those problems and some misconfigurations that can result in errors (mainly: config file parse errors) and can trip up first-time users.

The command to start honeyd from your terminal is:
# honeyd -d -f honey.conf

Here, honey.conf is my configuration file and -f points to that file. -d keeps honeyd in the foreground with debugging output, so you can see on the console what it is doing.

eth0 not an IP

1

Reason: Your ethernet connection does not have an IP address.

When you are testing on a single machine, the first thing you need to do is give your interface an IP address. The below command will take care of it. Replace ‘eth0’ with your respective interface.

# ifconfig eth0 192.168.1.1
(If you are using a different interface like eth2, you need to mention it while starting honeyd: -i <interface>, for example -i eth2.)

Now, here is my sample configuration file:

2

Let's dissect the file line by line.
1: creates a personality, which we will refer to as windows.
2: names the personality Windows XP, so someone scanning our honeypot will see it as such.
3: includes the ftp.sh script, which simulates an FTP server.
4,5,6: open the TCP ports 135, 139 and 445.
7: binds the IP address to our personality.

Try running honeyd with our honey.conf file. Error?

parsing configuration file failed

Now, in my initial days I took help for the FTP server from a blog post on linux.com, “Weekend Project: Use HoneyD to fool attackers”. Being a linux.com tutorial, it is likely to be at the top of your Google search for HoneyD on Linux. My point is, they have simplified the configuration process and explained it well, but there is one small error. I have highlighted it in the screenshot below:

3

The error you will get is: parsing configuration file failed, on line 3. set is used to set our personality to some predefined condition, while add is used to provide something extra. If you use set to attach preloaded scripts, you will surely face a parsing error.

Solution: replace set with add.
This should be your configuration:
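In text form, the corrected file looks roughly like this (the ftp.sh path and the bound IP address are placeholders for your own values; the personality name is revisited further below):

create windows
set windows personality "Windows XP"
add windows tcp port 21 "sh /usr/share/honeyd/scripts/ftp.sh"
add windows tcp port 135 open
add windows tcp port 139 open
add windows tcp port 445 open
bind 192.168.1.10 windows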

4

Now, your honeypot will start its work without any error. Time to rejoice? Kind of.

Logging

How do you log attacks or scans on your honeypot? Use -l <filename>. Normally, it is logged in a directory named honeyd under the /tmp directory. If you don't have that directory, create it with mkdir.
The command I used for logging the attempts was:

5

Ah, permission denied!
How to solve this? You guessed right – the file is write-protected, so give everyone write permission with the chmod command.
# chmod 766 /tmp/honeyd/log

Can’t detect Ping?

As you have seen in the configuration file, I have not yet given my honeypot a MAC address. Hence it is not yet reachable from the outside world; try pinging it from a different computer and it will fail.
Provide a MAC address to your honeypot with a line like the one in the screenshot below. Check the MAC address of your host machine, and give your honeypot an address as close as possible to the host address.

6
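
The line takes the following form (the MAC address here is only a placeholder):

set windows ethernet "00:16:3e:aa:bb:cc"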

It is good if you have given the personality name as “Microsoft Windows XP Professional SP1”. If you have given a name like Windows XP (like I have given, in the below screenshot) or Linux Ubuntu 13.10, you are prone to getting an error while parsing the configuration file.

7

8

There are conventions for naming the personalities: there is a list of fingerprints (names for such personalities) that should be used for naming the honeypot personality. The fingerprints are located in the nmap.prints file. Honeyd uses the fingerprints that nmap identifies, so when someone scans the honeypot they will see the name you provided.

Locate the nmap.prints file with the locate command. Then you can use more to view the whole file, or, if you simply want to see the fingerprint names, use the grep command as shown in the below screenshot (ref: Honeyd FAQ):

9
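
For example, something along these lines (the path to nmap.prints may differ on your system):

# locate nmap.prints
# grep "Fingerprint" /usr/share/honeyd/nmap.prints | more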

You can use any of the personalities in the list displayed by the above command.
Sometimes, though, there is a need to specify the fingerprint file on the command line. The command should include -p <fingerprint.file>:
# honeyd -d -f honey.conf -l /tmp/honeyd/log -p /usr/share/honeyd/nmap.prints

Again, start your honeypot with a new personality.
Ping the honeypot from a remote machine. It will log everything, along with displaying it on the console.
Try doing FTP to your honeypot. It will show you the FTP login screen. (As usual, anonymous login is not allowed!)
Let me know if you face any other problems in configuring your honeypot.

Conclusion: HoneyD is very easy to work with, and hence the choice of many. But common mistakes like typos or skipped proofreading can bug you till infinity. You mostly need to take care with the initial configuration.
Adios!

Snort on Debian

Snort is an Intrusion Detection and Prevention System for Windows and *nix machines. You can download it from here: Snort Download.

Well, for Debian we don't need to download it from there. The command to download and install it is:

# apt-get install snort

This will download and install Snort to your Debian.

The next step is to configure Snort to generate alerts for activity of interest. For example, we can consider ICMP ping requests: whenever someone pings our machine, an alert will be logged.

For configuration, three directories are necessary. If they are not created automatically, create them with the mkdir command. They are:

/etc/snort

/etc/snort/rules

/var/log/snort

Now, our configuration file is: /etc/snort/snort.conf

If you need, you can take a backup of the original file, and then create a new file and edit it as below:

include /etc/snort/rules/icmp.rules

We don't need to add other lines; since right now we are concerned only with ICMP requests, we will configure only the icmp.rules file, and hence it is the only file referenced in snort.conf.

Now, the icmp.rules file contains the below content:

alert icmp any any -> any any (msg:"Hey, someone pinged!"; sid:477; rev:3;)

This line will log any ICMP request from any source, with the given message. The sid and rev are used to uniquely identify Snort rules and their revisions.

Now, to start Snort listening on interface eth1, the command will be:

snort -c /etc/snort/snort.conf -l /var/log/snort -i eth1

The path after -c is the Snort configuration file, the path after -l is where the alerts are stored, and -i selects the interface.

Now, ping the machine from some other machine, and you will find an entry in the alert file located in /var/log/snort. It will contain the source and destination IP addresses, the time and date of the incident and other information related to the query.

Similarly, you can configure Snort to generate alerts on various incidents like FTP login, SSH attempts, Telnet requests.

Snort Configuration for ICMP
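
As a rough sketch, rules along the following lines could be placed in their own .rules files and included from snort.conf just like icmp.rules (sid values above one million are conventionally reserved for local rules):

alert tcp any any -> any 21 (msg:"FTP connection attempt"; sid:1000001; rev:1;)
alert tcp any any -> any 22 (msg:"SSH connection attempt"; sid:1000002; rev:1;)
alert tcp any any -> any 23 (msg:"Telnet connection attempt"; sid:1000003; rev:1;)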

WordPress Brute-Force Attack

Wordpress attack

Apparatus:

Distributed botnet, around tens of thousands of bots with their respective IP addresses
A pass file of around 1000 entries with some normal passwords
Default username: ‘admin’

Steps:

  • WordPress 3.0 was released three years ago, and many users carry on with ‘admin’ as their default username and some usual password
  • A brute-force with username ‘admin’ and a password from the above-mentioned file
  • The botnet tries this attack on each and every WordPress portal available on the Internet

Objective:

A well-planned distributed attack (just like itsoknoproblembro, which shook the banking world) against some hot spot on the Internet.

How:

WordPress web servers have very high, practically unlimited bandwidth. Any attack triggered from these servers will have a great impact. This can be done to create a bigger and better zombie-net.

Conclusion:

Save your WordPress! Change your password if the username is admin (and also change the username from admin to something else, to be secure).

Some more tips:

If you are using WordPress.com, change your password and enable two-step authentication.

If you are the admin of a WordPress installation on your own server, you have some more steps to follow – like creating a password for the .wpadmin file and making some security modifications in the .htaccess file.

More description for making these changes is available here: Hostgator Support for WP Attack

Why is it necessary to keep your email secure?

Apart from the usual reasons for keeping our email accounts secure, there are many more that we tend to ignore, or whose possibilities we are not aware of.

Take this scenario – why keep your work-related and social email accounts separate and confidential (if possible):

If someone knows basic information about you, your social networking account can be hacked. The main ingredient is your email ID. It's better to keep the ID you use for social networking private. If the work and social email IDs are the same, there are more chances of people guessing or finding out your basic information, giving your account a greater chance of getting compromised.

I just wanted to let you know – that nobody is secure.

A few minutes back, I received a DM on Twitter from a friend. The DM contained –

     Did you see this pic of you? lol bit.ly/YqGEju

And it was from a girl who has been in the network security field for 12 years. Clearly her account was hacked, and the compromised account was being used to send DMs to capture some more accounts.

And the result of clicking on that link? Probably some Metasploit exploit, abusing the vulnerabilities on your computer.

The point is: do not share email IDs with everyone, do not click on any link (even if it's from a friend, verify the link with some online checker), change your password every two weeks, keep separate email accounts, and patch your system regularly.

But still, you are insecure.

Adios!

Penetration Testing

What is penetration testing?

Penetration testing is the evaluation of a computer system – whether a single device or a group of interconnected nodes – against potential attacks from inside or outside that could break its security.

Types of Penetration testing –

  • Password Attack (brute force, cain & abel, ophcrack)
  • Session management holes (cookiedigger)
  • Protocol and config management (SSL, Database, port scanning)
  • Info gathering (social engineering, phishing, fingerprinting)
  • Data validation and testing (cross site scripting, buffer overflow, SQL injection)
  • HTTP-Web monitoring
  • Denial of Service attacks
  • Web testing frameworks (w3af, websecurity)

 

(Post reference – The Open Web Application Security Project)

Configuring Apache with a SSL Connection

You can download the latest version of Apache from here: Apache, and the documentation for installing and configuring the server can be found here: Official Docs

(If you are using BackTrack, Apache will be already installed and configured)

The configuration path of Apache is /etc/apache2/

(The Apache version shown here is apache2, it will differ if you have a different version)

Steps:

Create a directory for keeping the SSL certificates and go to the directory

mkdir ssl

cd ssl

Create the server key: a 1024-bit RSA key, protected with the 'des3' algorithm. You will be asked for a passphrase, which you need to remember

# openssl genrsa -des3 -out server.key 1024

Create the certificate signing request (CSR) by providing the passphrase for server.key and the certificate details

openssl req -new -key server.key -out server.csr

Create the self-signed certificate using the X.509 standard, with a validity of 365 days

openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

You can check the list of files created with the 'ls' command, and view the contents of these files with the 'cat' command
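
To view the contents of the certificate in a readable form, openssl itself can decode it:

# openssl x509 -in server.crt -text -noout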

Start the apache server by the following command

/etc/init.d/apache2 start 

Check your server by typing "http://localhost" in your browser.

Now you need to make changes to enable the SSL connection. First go to the sites-available directory

cd sites-available

Modify the file “default-ssl”, replacing the values of SSLCertificateFile and SSLCertificateKeyFile as shown below:

default-ssl
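
The relevant lines end up looking roughly like this (the paths assume the ssl directory created earlier lives under /etc/apache2/):

SSLEngine on
SSLCertificateFile    /etc/apache2/ssl/server.crt
SSLCertificateKeyFile /etc/apache2/ssl/server.key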

Modify the file “default” by copying the virtual host block from above and making the changes in it as shown:

default

In the folder /etc/apache2/ you need to edit the 'httpd.conf' file, adding these two lines to the otherwise blank file:

httpd

Now enable the SSL module with the following command

a2enmod ssl

Restart the apache service and you will get the service started as shown below:

server start

Congratulations! Your SSL Apache server has started.

Now try to browse your Apache from a remote machine, by typing “http://ip of your server” in its browser.

To check the SSL connection, try 'https' instead of 'http' before the IP address

The first time, you will get a message that the connection is untrusted (because it is using the certificate we have just created, which your browser does not yet trust). Add an exception for the certificate.

untrusted

After you add an exception for the certificate, you will finally get the SSL connection to the Apache server. The SSL connection will keep working as long as the respective certificate exception remains in your browser.

https