Unix / Linux: Remove duplicate lines from a text file using awk or perl

Datetime: 2016-08-23 00:58:43          Topic: AWK, Perl, Unix

I have a text file that contains exact duplicate lines. I need to remove all of those duplicate lines while preserving their order on a Linux or Unix-like system. How do I delete duplicate lines from a text file?

You can use Perl, awk, or Python to delete all duplicate lines from a text file on Linux, OS X, and other Unix-like systems, keeping only the first occurrence of each line.

Sample data file

$ cat data.txt
this is a test
Hi, User!
this is a test
this is a line
this is another line
call 911
this vs that
that vs this
How to Call 911
that and that
Hi, User!
this vs that
call 911

How to remove duplicate lines from a text file using awk

The following syntax removes duplicate lines while preserving their order. The expression !seen[$0]++ uses an associative array named seen to count how many times each line has appeared; it is true only the first time a given line is encountered, so awk prints only the first occurrence:

awk '!seen[$0]++' input > output
awk '!seen[$0]++' data.txt > output.txt
more output.txt

Sample outputs:

this is a test
Hi, User!
this is a line
this is another line
call 911
this vs that
that vs this
How to Call 911
that and that
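
If you want to update the file itself rather than write to a second file, one common approach is to write the deduplicated output to a temporary file and then replace the original. This is only a sketch; the temporary file name below is just an example, and awk cannot safely redirect output back into the file it is reading, which is why the extra step is needed:

awk '!seen[$0]++' data.txt > /tmp/data.txt.$$ && mv /tmp/data.txt.$$ data.txt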

How to remove duplicate lines from one or more text files using Perl?

The syntax is similar; the %seen hash records lines that have already been printed, and next skips any line that has been seen before:

perl -lne '$seen{$_}++ and next or print;' input > output
perl -lne '$seen{$_}++ and next or print;' data.txt > output.txt
more output.txt

Sample outputs:

this is a test
Hi, User!
this is a line
this is another line
call 911
this vs that
that vs this
How to Call 911
that and that
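
How to remove duplicate lines from a text file using Python

Since Python was mentioned above as another option, here is a minimal sketch of the same idea: a set remembers every line that has already been printed, so only the first occurrence of each line is written. This is just one way to do it, reading from standard input and writing to standard output:

python3 -c '
import sys
seen = set()
for line in sys.stdin:
    if line not in seen:
        seen.add(line)
        sys.stdout.write(line)
' < data.txt > output.txt

The output is identical to the awk and Perl examples above.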