pranavk.me - Pranav Kant (pranav913@gmail.com) - 2022-08-28T02:25:36+00:00 - http://pranavk.me
Dialog tunneling - Part 2 (2018-04-12) http://pranavk.me/open-source/dialog-tunneling---part-2
<p>This post is a continuation of the dialog tunneling post I wrote
<a href="http://pranavk.me/open-source/dialog-tunneling/">here</a>. That post
talked about what we wanted to achieve and showcased it using a C++ test
tool, gtktiledviewer. The real aim was to integrate
this feature into LibreOffice Online so that people can use the
awesome features that already exist in the LibreOffice desktop
version.</p>
<p>Over the last couple of months, we have been polishing it to look nicer,
killing minor inconsistencies left and right, and working on tunneling the
modal dialogs as well, where collaborative editing in Online presented
interesting problems to deal with.</p>
<p>From the implementation point of view, the API is now also more
generic and
<a href="https://gerrit.libreoffice.org/gitweb?p=core.git;a=commit;h=b5e27fd809845577a90cc1811de062c070110078">simplified</a>. We
earlier had different LOK callbacks to notify about
dialog invalidation and about other controls like combo boxes, the color picker,
etc. Now a single callback handles all of these. This is
possible because we now assign a unique window ID to each of these
entities (the main dialog frame as well as its child controls), so
all of them can talk to Online independently. The client (Online) of
course has to manage the parent-child relationships on its own; the
relationship is indicated only when a window is first created, via the
‘created’ callback. Since we were also interested in tunneling other things like
autofilter menus, spelling suggestion context menus, etc., this
simplification of the API helped immensely in achieving that goal.</p>
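<p>To make the parent-child bookkeeping concrete, here is a toy sketch (in Python; the names and callback shapes are my illustration, not the actual LOK/Online API) of a client-side registry built purely from ‘created’ events:</p>

```python
# Toy sketch (not the actual LOK/Online API): tracking the window tree
# that the client must reconstruct from 'created' callbacks, which are
# the only messages carrying the parent-child relationship.

class WindowRegistry:
    def __init__(self):
        self.parent = {}    # window id -> parent id (None for top-level)
        self.children = {}  # window id -> list of child ids

    def on_created(self, win_id, parent_id=None):
        # A 'created' callback announces a new window (a dialog frame or
        # a child control such as a combo box popup) with a unique id.
        self.parent[win_id] = parent_id
        self.children.setdefault(win_id, [])
        if parent_id is not None:
            self.children.setdefault(parent_id, []).append(win_id)

    def on_closed(self, win_id):
        # Closing a dialog frame tears down its child controls too.
        for child in list(self.children.get(win_id, [])):
            self.on_closed(child)
        parent_id = self.parent.pop(win_id, None)
        if parent_id is not None:
            self.children[parent_id].remove(win_id)
        self.children.pop(win_id, None)

reg = WindowRegistry()
reg.on_created(1)               # main dialog frame
reg.on_created(2, parent_id=1)  # e.g. a color picker child window
print(reg.children[1])          # -> [2]
reg.on_closed(1)
print(1 in reg.parent)          # -> False
```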
<p>Tunneling modal dialogs caused some grief when we started to test them
with collaborative editing. When we launched a modal dialog in Online,
the dialog would start its own main-loop inside the already running
application-level main-loop. This is okay. But when another user
in the same document opened the same dialog (editing collaboratively), that dialog
would launch its own main-loop on the same stack. Now the
first user cannot close their dialog, because its main-loop is buried
one level down the stack. Here’s an illustration, in case the text above
was confusing:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
ExecuteDialog2-loop()
...
ExecuteDialog1-loop()
...
Application-level main-loop()
</code></pre></div></div>
<p>How do you close Dialog1 without closing Dialog2? Just don’t let dialogs have their own main-loop; rather, execute them
asynchronously: show them on the screen and tunnel them to
Online. <a href="http://holesovsky.blogspot.in/">Kendy</a> did a great job here
converting the dialogs to execute asynchronously,
along with the necessary infrastructure changes in the dialog execution code.</p>
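<p>Here is a toy model of the difference (the assumptions are mine; LibreOffice’s real dialog code is C++): instead of each dialog spinning a blocking loop on the stack, a dialog is simply shown and delivers its result through a callback, so dialogs can close in any order:</p>

```python
# A toy model of why asynchronous execution sidesteps the nested-loop
# problem: no per-dialog Execute() loop sits on the stack, so closing
# one user's dialog never depends on another user's dialog.

class AsyncDialog:
    def __init__(self, name, on_close=None):
        self.name = name
        self.open = True
        self.on_close = on_close  # result delivered via callback, not return

    def close(self, result=None):
        self.open = False
        if self.on_close:
            self.on_close(result)

results = []
d1 = AsyncDialog("user1-dialog", on_close=results.append)
d2 = AsyncDialog("user2-dialog", on_close=results.append)

# With nested blocking loops, d1 could not close before d2 (its loop is
# lower on the stack). Asynchronously, the close order is unconstrained:
d1.close("ok-from-user1")
print(d2.open)   # -> True: user2's dialog is unaffected
d2.close("ok-from-user2")
print(results)   # -> ['ok-from-user1', 'ok-from-user2']
```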
<p>Moving the dialogs to asynchronous execution helped solve another
problem: the whole of Online freezing when a dialog was launched. Online has a single thread
to interact with LibreOffice core; so when we launch a modal dialog
from Online, that thread blocks and waits for the dialog execution
to finish (the closing of the dialog). This would freeze Online because
your key and mouse events would no longer reach core and get
processed: not because core cannot process them, but because the thread
responsible for passing those events to core is blocked.</p>
<p>We have been continuously polishing this feature to improve the user
experience for our Online users. Michael and Kendy localized the
dialogs and did some theming work on the tunneled dialogs so that they suit our
Online theme better.</p>
<p>It was an interesting journey overall. You can
find all the patches
<a href="https://gerrit.libreoffice.org/gitweb?p=core.git&a=search&h=HEAD&st=commit&s=lokdialog">here</a>. Big
thanks to Collabora for sponsoring this work and to all the team members
involved. You can find all the improvements described above in the latest <a href="https://www.collaboraoffice.com/code/">CODE</a>.</p>
Dialog Tunnelling (2017-11-29) http://pranavk.me/open-source/dialog-tunneling
<p>So I’m finally resurrecting this blog after a long time.</p>
<p>I’m simply going to talk about what I’ve been currently working on in Collabora Online or LibreOffice Online, as part of my job at Collabora.</p>
<p>In our quest to make more features available to our users editing documents in the browser, we are attacking something that contains the majority of the features in LibreOffice – the dialogs. One of the complaints that power users make about Online is that it lacks advanced features: they cannot add coloured borders to their paragraphs, manage tracked changes/comments, correct the spelling and grammar in the document, etc. The question before us is: how do we bring these functionalities to the cloud, at your disposal in your browser tab?</p>
<p>We really don’t want to write another million lines of code in Javascript to make them available in your browser and then deal with a separate set of bugs for years to come.</p>
<p>So we decided to come up with a plan to just tunnel all the hard work that developers have done over the past couple of decades: build the appropriate infrastructure to open the dialogs in headless mode, paint them as bitmaps in the backend, and tunnel the image to you in the browser. Then we add life to them by tunnelling your mouse/key events as well, which invalidate and update the image you see in the browser. Don’t worry; we are not sending the whole dialog image back to your browser every time. Only the part of the dialog that needs updating is sent back to the browser, saving precious time and network bandwidth and improving your UX.</p>
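<p>The ‘only send the part that changed’ idea can be sketched like this (a hedged illustration, not the actual Online code; tiny 2-D lists stand in for real bitmaps):</p>

```python
# Sketch: compute the bounding rectangle of the pixels that differ
# between the previous and current dialog bitmaps, so only that region
# needs to be re-encoded and sent to the browser.

def dirty_rect(old, new):
    """Return (x, y, w, h) of the changed region, or None if identical."""
    xs, ys = [], []
    for y, (row_old, row_new) in enumerate(zip(old, new)):
        for x, (a, b) in enumerate(zip(row_old, row_new)):
            if a != b:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

old = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
new = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(dirty_rect(old, new))  # -> (1, 1, 2, 1)
```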
<p>The current state of the project looks really promising. Not just the modeless dialogs: we are able to tunnel the modal ones as well, which is not something we had expected earlier.</p>
<p>Since text is boring, here’s a preview that shows dialog tunnelling in action in our test tool, GtkTiledViewer. The integration with Online is ready too and undergoing some final polishing. But it’s not something you’d have to wait too long for; we are polishing a <a href="https://gerrit.libreoffice.org/gitweb?p=core.git;a=shortlog;h=refs/heads/feature/lok_dialog2">big refactor</a> of LibreOffice core master to install the dialog infrastructure needed for the integration. Now you will be able to do pretty much all the things in Online (and in <a href="https://www.collaboraoffice.com/code/">CODE</a> version 3.0, soon to be released) that you’ve always wanted to do.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/AHETaLkhftg?rel=0" frameborder="0" allowfullscreen=""></iframe>
<p><a href="/documents/native_dialogs.pdf">Here</a> are the slides from the talk I delivered on the same topic at our annual LibreOffice Conference in Rome this year.</p>
Fedora @ GNOME.Asia (2016-04-24) http://pranavk.me/open-source/fedora--gnomeasia
<p>I attended GNOME.Asia 2016 last week, held in New Delhi from 22-23
April. It was held at a university, MRIU, Faridabad, and we thought it would be a good opportunity to
spread awareness about Fedora among students and faculty, so we organized a
Fedora booth there.
It was my first time hosting a booth, and it was more fun than I had thought
it would be. There was a rush of students curious to know about Fedora,
and we happily spread awareness about Linux and Fedora.</p>
<p><img src="/images/fedora-gnome-asia.jpg" /></p>
<p>Many students asked about ISO images, and we gave them away. Some students had
a few troubles/doubts installing Fedora, or Linux in general, and we helped them
kickstart their Fedora sessions.</p>
<p>There were also interesting discussions with faculty, who welcomed us and wanted
us to hold introductory sessions on Fedora from the next academic session.
They were also interested in installing Fedora on all the systems in their
laboratories. We were pleased to get such a positive response from all the
people at the university and appreciate their curiosity to know more and
more about Fedora.</p>
<p>Besides university students and faculty, we also interacted with people from PyDelhi, the local
Python group, and discussed the possibility of organizing something related to Fedora in collaboration
with the PyDelhi group in the Delhi region.</p>
<p>We have taken note of the challenges Fedora faces, for example, not being as well
known among students and faculty as other distributions like Ubuntu. I hope that with
the sessions we have planned, both at the university and elsewhere in Delhi, thanks to the various people
we met at the booth, we will be able to make some progress in the right direction in this region. So, overall,
I think the booth was a huge success, accomplishing everything we had in mind.</p>
<p>I would like to thank all the people involved, pjp, pravins, and prth, who helped
attend to people at the booth and made it a success in the end. Without their
help, it would not have been possible to attend to all the people
and their queries.</p>
<p>Lastly, thanks to Fedora for sponsoring my travel to GNOME.Asia, to my employer,
Collabora, for allowing me to attend the event,
and also to the university and the GNOME organizing committee for allowing the Fedora booth there.</p>
Update on Libreoffice and GNOME integration (2016-01-23) http://pranavk.me/open-source/update-on-libreoffice-and-gnome-integration
<p>It’s been a long time since I talked about the <a href="http://pranavk.github.io/open-source/integrate-los-tiled-rendering-in-gnome-documents/">project</a> that I started with GSoC
2015. We reached pretty <a href="http://pranavk.github.io/open-source/gsoc-summer-wrapup-report/">exciting
results</a> by the
end of the summer, where we could see the integration working pretty well with
LibreOffice. We finished and merged all the
major work on the LibreOffice side, along with a just-made-it-work integration
with gnome-documents. Things were still in the
development stage for gnome-documents, and we needed a good amount of effort to
get it merged upstream.</p>
<p>Things moved pretty slowly on the integration, because from time to time I would
realise that I had missed something in LO and needed to fix it before I
could move forward with the integration. I would then jump back to LO, fix it,
switch back to gnome-documents, and so on. But it was only until Dec 2015
that things were slow. I suspect
<a href="https://wiki.gnome.org/Hackfests/ContentApps2015">this</a> to be the turning
point. <a href="http://www.hadess.net/">Bastien</a>, <a href="https://debarshiray.wordpress.com/">Debarshi</a>, and
<a href="https://blogs.gnome.org/cosimoc/">Cosimo</a> pulled the strings from the
gnome-documents side and started working with my earlier WIP work/patches for
gnome-documents. There were many issues that needed to be fixed for a proper,
ready-to-merge LO
integration and a better user experience. But I was lucky that I now had the
experts to take care of it.</p>
<p>As I expected, there were still some minor fixes to be done on the
LO side, which they found during the integration. Bastien would report LO-related
bugs;
I would fix them in LO; <a href="https://davetardon.wordpress.com/">David</a> would help
with the reviews, and build and ship the package for us. As I write this, I am glad
and proud that most of the <a href="https://bugzilla.gnome.org/show_bug.cgi?id=753686">work</a> is already
merged upstream in gnome-documents. With the LO 5.1 release just a few weeks away, gnome-documents seems all set to
integrate LO with its next major release, 3.20.</p>
<p>Here is a screencast I made running gnome-documents master with the <a href="http://koji.fedoraproject.org/koji/buildinfo?buildID=711257">LO 5.1.0 rc2
koji build on
Fedora</a>. There are
bug fixes that couldn’t make it into LO 5.1, and some more issues that you will uncover
as you use this. :) So, it might take a little more time (one more release, maybe)
for it to settle down. Note that the LibreOfficeKit API (which exposes LO functionality) is still unstable, but the widget is quite
usable in view-only mode, and that is how we have integrated it in
gnome-documents for now.</p>
<p>The screencast is here:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/gDLkVUjGChg" frameborder="0" allowfullscreen=""></iframe>
<p>Earlier, gnome-documents converted these unsupported formats into PDFs
using the unreliable <code class="language-plaintext highlighter-rouge">unoconv</code> command, which sometimes would not give good results,
especially with spreadsheets, where it would scramble things up during the
conversion. With this new widget, you now see the documents
exactly as they are, unless you don’t have LibreOffice installed on your box, in
which case it pops up an error asking you to install LibreOffice.</p>
<p>If you want to try it out yourself, you need to build at least LibreOffice 5.1.0
rc2 and the gnome-documents master branch. If you are on Fedora, you can use this
<a href="http://koji.fedoraproject.org/koji/buildinfo?buildID=711257">koji build</a> to
install LO 5.1.0 rc2 on your box, and you can use jhbuild to build
gnome-documents from master. Running gnome-documents should now automatically
make it use LibreOffice in the background.</p>
<p>I would like to thank all the aforementioned people, Bastien, Debarshi, and
Cosimo, for finishing the GNOME integration with LibreOffice. On the LibreOffice
side, thanks to <a href="http://vmiklos.hu/blog/">Miklos</a> and
<a href="https://people.gnome.org/~michael/">Michael</a>, with whom I started this GSoC
project, and David Tardon, who helped us ship the LO package with much-needed
fixes in time.</p>
<p>While I am at it, I also want to announce my new job at Collabora
Productivity, where I will be hacking on LibreOffice full-time. It will be an
exciting and wonderful learning experience for me, and I am greatly looking forward
to it.</p>
LibOCon 2015 -- Aarhus (2015-09-28) http://pranavk.me/open-source/libocon-2015---aarhus
<p>I spent last week with the LibreOffice community, talking, hacking, and
altogether enjoying myself a lot with them. It was my first LibreOffice conference, and I
have brought home a lot of learning this time. I relish each moment I spent amongst
such awesome people.</p>
<p>The conference was very well organized in the beautiful city of Aarhus, in
Denmark (a little bit colder than I had anticipated). As they say, more than a
city, Aarhus was a feeling. The local organizing team
left no stone unturned in taking care of the participants, and everything went
smoothly. I also really liked the newly constructed venue where the event was held.</p>
<p>More than anything else, it was a pleasure meeting my GSoC mentors, Michael Meeks
and Miklos Vajna, in person, and
presenting my work before the LibreOffice community. Hacking together with them
was a wonderful learning experience for me, especially hacking on the train with
Miklos, trying to fix some bugs. It was also a good experience meeting the faces
behind the IRC nicks, talking to them, sharing ideas, and growing my social
skills altogether (I hope I did) :).</p>
<p>Lastly, I would like to thank The Document Foundation for sponsoring this
conference for me, without which it would not have been possible for me to come
that far and gain this experience. As mentioned, attending this conference
was a good learning experience which has motivated me to excel as a programmer
and to keep this relationship with LibreOffice strong in the future.</p>
GSoC Wrapup Report (2015-08-21) http://pranavk.me/open-source/gsoc-summer-wrapup-report
<p>This is a GSoC wrap-up report of everything I did this summer. Though my official
GSoC organization was LibreOffice, the work also involved GNOME directly and Mozilla
indirectly.
Some of you who are already familiar with my work might find this post
repetitive, since I will be repeating many of the things I have
already talked about in my earlier posts.</p>
<h2 id="initial-state-of-the-widget">Initial state of the widget</h2>
<p>Before I started working on it as a Summer of Code project, the widget:</p>
<ul>
<li>rendered all the tiles in the loaded document</li>
<li>made calls to the LibreOffice core in the same UI thread</li>
<li>was not introspectable</li>
<li>was Gtk2</li>
<li>and needed minor fixes here and there</li>
</ul>
<h2 id="tiled-rendering">Tiled Rendering</h2>
<p>I started by implementing tiled rendering in the widget. The first plan was to
reuse the Mozilla tiled rendering code. After analyzing and discussing this, we scrapped
that plan as it was quite infeasible. A better approach was to understand how tiled
rendering is implemented and write our own class that handles the tiles for
you. We took ideas from the Mozilla tiled rendering code and GeglBuffer.</p>
<p>We started with a small TileBuffer class, and it gradually improved as the
widget started to demand more from it. There is still some scope for
improvement in the TileBuffer class, which is already on my radar. At the moment, it exposes enough API for the
widget to work smoothly.</p>
<h2 id="only-render-visible-tiles">Only render visible tiles</h2>
<p>As mentioned, the widget used to render all of the tiles even if they were not
visible on the screen. This was a huge bottleneck for the widget’s
performance. Why render millions of tiles in a large document if most of them are
not even visible to the user?</p>
<p>We changed this to a demand-based model: only render tiles when there is a demand
for them, that is, when the user wants to see them. We also keep caching these tiles
with the help of the TileBuffer, so that the next time the user asks for these tiles, the widget
doesn’t have to render them again.</p>
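<p>A minimal sketch of such a demand-driven cache (illustrative Python; the real TileBuffer is C++ and its API differs):</p>

```python
# A demand-driven tile cache in the spirit of the TileBuffer described
# above: render only on a miss, serve repeat requests from the cache,
# and evict the least recently used tile when the cache is full.
from collections import OrderedDict

class TileCache:
    def __init__(self, render_fn, capacity=256):
        self.render_fn = render_fn  # the expensive call into the core
        self.capacity = capacity
        self.tiles = OrderedDict()  # (col, row) -> pixel data, LRU order
        self.renders = 0

    def get_tile(self, col, row):
        key = (col, row)
        if key in self.tiles:
            self.tiles.move_to_end(key)  # cache hit: no re-render
            return self.tiles[key]
        self.renders += 1                # cache miss: render on demand
        tile = self.render_fn(col, row)
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)  # evict least recently used
        return tile

cache = TileCache(lambda c, r: f"tile-{c}-{r}", capacity=4)
cache.get_tile(0, 0)
cache.get_tile(0, 0)  # second request is served from the cache
print(cache.renders)  # -> 1
```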
<h2 id="free-the-main-thread">Free the main thread</h2>
<p>All the LibreOffice operations, like rendering new tiles, selecting a
paragraph, making the selected text bold, etc., are a lot of work, and if done in the main
thread they would not give a very smooth user experience. So we decided to move all
such heavy operations (all LibreOffice calls) to a new dedicated worker thread,
whose job is to perform the LibreOffice calls and return the results to the main
thread. As a further optimization, we used a thread pool with a single thread so
that we don’t have to create a new thread every time. Note that
we used a thread pool with a single thread because LibreOffice is almost
single-threaded, and it would be useless to create multiple worker threads
calling into the same LibreOffice core instance. Furthermore, a thread pool with a single
thread automatically queues the LibreOffice operations for us and executes
them when the thread becomes free.</p>
<p>As of this writing, we have moved all LibreOffice calls but one to the worker
thread. This has significantly improved the widget’s performance.</p>
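<p>The single-worker arrangement can be sketched as follows (a Python illustration of the idea, not the widget’s actual threading code):</p>

```python
# Sketch of the "thread pool with a single thread" idea: one dedicated
# worker serializes all calls into the (mostly single-threaded) core,
# queueing them automatically, while the UI thread stays free.
import queue
import threading

class SingleWorker:
    def __init__(self):
        self.tasks = queue.Queue()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            fn, args, done = self.tasks.get()
            if fn is None:       # sentinel: shut down
                break
            done(fn(*args))      # run the "LO call", report the result

    def submit(self, fn, *args, done=lambda r: None):
        self.tasks.put((fn, args, done))  # returns immediately

    def shutdown(self):
        self.tasks.put((None, (), None))
        self.thread.join()

results = []
worker = SingleWorker()
worker.submit(lambda x: x * 2, 21, done=results.append)  # a "heavy" call
worker.shutdown()
print(results)  # -> [42]
```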
<h2 id="gobject-friendlyintrospectable-widget">GObject friendly/introspectable widget</h2>
<p>One of the aims of this project was to integrate the widget with
gnome-documents. GNOME Documents is written in JavaScript, and the widget is in
C++. GObject Introspection came to our rescue here. We started making this
widget more GObject friendly and added the necessary comments/annotations, and finally
we were able to use this widget from any of the language bindings, including
JavaScript.</p>
<h2 id="gtk3-port">Gtk3 port</h2>
<p>Since we wanted to integrate the widget with gnome-documents (Gtk3), we ported
the widget to Gtk3. This also gave the widget a new look and feel.</p>
<h2 id="ship-introspection-files-with-libreoffice">Ship introspection files with LibreOffice</h2>
<p>Installing the introspection files (.gir/.typelib) into their standard location
on the user’s computer is something that doesn’t fit well into the LibreOffice
installation model. The current plan is to let distributions execute a
script (create_tree.sh) provided by LibreOffice that generates and installs
the introspection files (.gir/.typelib) into their standard location. You will
most likely see this script being used by distributions with LO 5.1.</p>
<h2 id="integrating-it-with-gnome-documents">Integrating it with GNOME Documents</h2>
<p>For a few days, I used a sample JavaScript application that uses this widget to
show and edit documents. This was useful for debugging. Gradually, I started to
write code for gnome-documents to use this widget. I had to change the
integration model a few times. Finally, I ended up writing a new class to handle
the LibreOffice widget, and the integration works quite well now.</p>
<p>If you are interested in trying this widget out, you can check out my
gnome-documents feature branch
<a href="https://git.gnome.org/browse/gnome-documents/log/?h=wip/pranavk/lokdocview">here</a>.
However, to make this work, you have to generate the introspection files manually. I have
created a <a href="https://wiki.documentfoundation.org/Development/Integrating_LOKDocView_and_GNOME_Documents">wiki page</a>
that should help you in this regard.</p>
<h2 id="eta-in-gnome-documents">ETA in GNOME Documents</h2>
<p>All the work on the widget is already in LO master, which will become LibreOffice
5.1, to be released around January 2016. This means we still have some time to
make improvements to the widget, if any, until the LO 5.1 freeze in November 2015. My hope is that we will
see the widget integrated in gnome-documents 3.20, if everything goes well.</p>
<p>Here is a screencast I made where gnome-documents uses the LibreOffice
widget to show office documents, while still using the Evince view to show the PDF
documents.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/bxh4k0rFMc0" frameborder="0" allowfullscreen=""></iframe>
<p>Overall, this was the best project I have ever worked on. I learnt a
lot this summer. I would like to thank my GSoC mentors, Michael Meeks and
Miklos Vajna, for their continuous support and ideas throughout the
summer. I would also like to thank Debarshi Ray and Cosimo Cecchi for their
invaluable suggestions that helped me finally integrate the widget with GNOME
Documents. It’s still not in master, but it works well, and I hope to see it there
soon. Also, thanks to the Mozilla community for helping me with tiled rendering concepts.</p>
GUADEC 2015 in Gothenburg (2015-08-10) http://pranavk.me/open-source/guadec-2015-in-gothenburg
<p>This year GUADEC was organized in the wonderful city of Gothenburg, in Sweden. It
was my pleasure to attend this conference for the second time and to meet several
awesome contributors in the FOSS community again. Discussions with several contributors
gave me wonderful insight into various important topics in both GNOME
and the LibreOffice project.</p>
<p>As a Summer of Code student for LibreOffice this time,
I also gave a talk about my <a href="http://pranavk.github.io/open-source/initial-preview-of-libreoffice-integration-with-gnome-documents/">project</a> with <a href="https://mmohrhard.wordpress.com/">Markus
Mohrhard</a>. You can download the slides
<a href="https://mmohrhard.files.wordpress.com/2015/08/integrating-libreoffice-with-gnome-documents.pdf">here</a>.</p>
<p>I would like to thank the whole GUADEC organizing team for organizing this event
smoothly. Last but not least, many thanks to TDF for sponsoring my travel
and accommodation, without which all of this would not have been possible. I am
looking forward to a long and exciting journey ahead with GNOME and The Document Foundation.</p>
FUDCon Pune 2015 (2015-07-03) http://pranavk.me/open-source/fudcon-pune-2015
<p>This year’s FUDCon, held in Pune last week, was my first ever FUDCon, and my first steps into the awesome Fedora community. This was also the first conference
where I delivered a full-fledged talk, about ‘Automating UI testing’, presenting some of the work I did in automating the UI tests for gnome-photos. The talk was mostly about how attendees can automate their own UI tests.</p>
<p>I also talked about ‘Integrating LibreOffice with your applications’ in a barcamp session, sharing and discussing ideas with a few people, presenting <a href="http://pranavk.github.io/open-source/initial-preview-of-libreoffice-integration-with-gnome-documents/">what I am up to</a>
in this LibreOffice project, and how they can take advantage of it by either directly using the new, evolving LibreOfficeKit API, or by using the new Gtk3 widget in their applications. I talked about how I am achieving this using tiled rendering, and how
I (with Michael and Miklos) am planning to enhance it in future by incorporating support for OpenGL, efficient tile management, and multi-threading.</p>
<p>Besides that, it was a wonderful opportunity for me to meet new people contributing to the Fedora project and to share ideas with them. I now have a better idea of how I can contribute more to Fedora, and feel motivated to continue my contributions. I have made
quite a few friends who, I think, would be happy to help if I plan to get started with any of the Fedora teams, and I do plan to involve myself in a few more interesting teams in the future, sparing time out of my regular work.</p>
<p>Last but not least, I would like to thank all the organizers for making this event possible. They have been working hard for months and have had many sleepless nights just to make sure everything remained on track. I would also like to thank them for sponsoring
my stay and travel, without which I would not have been able to attend the event.</p>
Initial preview of LibreOffice integration with gnome documents (2015-06-22) http://pranavk.me/open-source/initial-preview-of-libreoffice-integration-with-gnome-documents
<p>I managed to integrate LibreOfficeKit’s LOKDocView widget with gnome-documents,
finally. Here is the <a href="https://youtu.be/NdSbqMvLYt4">screencast</a> for the same.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/NdSbqMvLYt4" frameborder="0" allowfullscreen=""></iframe>
<p>There are still a lot of improvements needed. For example, we need a
new editing overlay in gnome-documents so that you can do operations
like bold, italics, underline, search, switching between edit mode and view mode, and
a few other things. There are also crashes right now, probably because I
haven’t yet written robust code to nicely separate the currently used EvinceView and
the newly used LOKDocView.</p>
<p>On the other hand, I have a few ideas to improve the widget backend, for example,
improving the tile buffer backend by rendering nearby tiles to increase visual
coherence. Right now, it only renders the visible tiles, but it would be good to
render the tiles near the boundary of the visible region so that scrolling
is smooth.</p>
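<p>The ‘render nearby tiles’ idea boils down to computing a ring of tile indices around the visible range, sketched here (illustrative only; coordinates are tile indices, not the widget’s real API):</p>

```python
# Sketch of prefetching: given the tile range currently visible, compute
# the one-tile-wide ring around it so those tiles can be rendered ahead
# of a scroll and the scrolling stays smooth.

def prefetch_ring(first_col, first_row, last_col, last_row):
    ring = set()
    for col in range(first_col - 1, last_col + 2):
        for row in range(first_row - 1, last_row + 2):
            inside = first_col <= col <= last_col and first_row <= row <= last_row
            if not inside and col >= 0 and row >= 0:
                ring.add((col, row))  # candidate for background rendering
    return ring

# Visible tiles are columns 1..2, rows 1..2; 12 surrounding tiles qualify.
print(len(prefetch_ring(1, 1, 2, 2)))  # -> 12
```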
<p>I will be working on these improvements in the coming weeks.</p>
Introspecting LOKDocView, the LibreOffice widget (2015-06-15) http://pranavk.me/open-source/introspecting-lokdocview-the-libreoffice-widget
<p>This is in
<a href="http://pranavk.github.io/open-source/integrate-los-tiled-rendering-in-gnome-documents/">continuation</a>
of my work under LibreOffice. For the past few days, I have been working on
restructuring the widget, LOKDocView, to make it
introspectable. I also ported the widget from Gtk2 to Gtk3, so applications can
now start thinking of using it.</p>
<p>To test its introspectability, I wrote a simple <a href="https://github.com/pranavk/lokdocviewer">test application</a>
in JavaScript making use of this widget. Here is a small
<a href="https://youtu.be/k7s7tfmQFTw">screencast</a> I made using the widget from JavaScript.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/k7s7tfmQFTw" frameborder="0" allowfullscreen=""></iframe>
<p>The widget still needs more
polishing to provide a sane, minimal, yet useful API to consumers and to hide the
still-unstable LibreOfficeKit API, so we will be improving that in the coming weeks.
We also plan to try GtkGLArea instead of the currently used GtkDrawingArea for the widget, to
enable OpenGL while rendering tiles and hence increase performance. The
backend currently uses a tile buffer, taking a few ideas from GeglBuffer and
Mozilla’s tiled buffering logic. I also hope to make further improvements to this
backend, using efficient algorithms to increase the widget’s performance.</p>
<p>Here is the <a href="http://cgit.freedesktop.org/libreoffice/core/log/?h=feature/gsoc-tiled-rendering">feature
branch</a>
for LOKDocView.</p>
Integrate Libreoffice with gnome-documents (2015-06-05) http://pranavk.me/open-source/integrate-los-tiled-rendering-in-gnome-documents
<p>This year I am working on integrating LibreOffice with
gnome-documents. gnome-documents currently only supports viewing documents. It
indirectly makes use of the poppler library to render PDF documents. To show any
other format, such as .docx, .odt, or .ods, it first converts the document into
PDF using the unreliable <code class="language-plaintext highlighter-rouge">unoconv</code> command, and then renders the PDF using
poppler. Hence, this also prevents gnome-documents from editing editable formats.</p>
<p>As part of my GSoC 2015 under LibreOffice, my main aim is to improve an
existing widget in LibreOffice, LOKDocView, and integrate the widget with
gnome-documents. LOKDocView makes calls to the LibreOffice core using
LibreOfficeKit. The current LOKDocView implementation needs a little polishing.
It also doesn’t
support efficient tiled rendering, which is essential especially at
larger zoom levels (you never want the application to render zillions of tiles
for you at 500x even if you don’t want to see the majority of them). One of the improvements that I will be making in this widget is efficient
tiled rendering, so that it only renders the visible part and, while scrolling,
tries to reuse the already rendered tiles to the best of its ability.</p>
<p>To improve the tiled rendering, I have modified the widget to make use of a tile
buffer, taking ideas from the Mozilla source code and how it manages tile
buffering. The current implementation of the tile buffer I created acts simply
as a cache, returning already rendered tiles instead of issuing a new render
command to the LO core. This makes navigating documents easy. You can find
more about this on my <a href="https://github.com/pranavk/core/commits/feature/gsoc-tiled-rendering">feature
branch</a>.</p>
<p>To further improve the tiled rendering support, we are also trying to find ways
of rendering the tiles on the GPU rather than on the
CPU. <a href="https://developer.gnome.org/gtk3/stable/GtkGLArea.html">GtkGLArea</a> seems
to be a good choice here, but it is quite new, and most importantly, LibreOffice
is still on Gtk2, and it might take some time for it to migrate to Gtk3.</p>
<p>Other than that, to make the rendering even more efficient, we can employ
techniques like
<a href="https://en.wikipedia.org/wiki/Multiple_buffering">double-buffering</a>, which I
will be analysing the feasibility of before implementing in the widget.</p>
<p>But this widget is still not a replacement for the existing
<a href="http://poppler.freedesktop.org/">poppler</a> library. LibreOffice can render
other document formats such as odt, doc, docx, etc., but its performance at rendering
PDFs is terrible. A better approach is to use both in gnome-documents, with
poppler used only for rendering PDFs.</p>
<p>Please feel free to comment, if you have any idea/optimization regarding this.</p>
Analysing ssh traffic usage per user (2015-04-19) http://pranavk.me/linux/analysing-ssh-traffic-usage-per-user
<p>I often use my computer as a router. My friends log in to my system via
ssh and use the internet. Sometimes it’s the opposite, that is, I log in to
their system to access the internet. But most of the time, it’s me who acts as a
router for others, as an internet gateway.</p>
<h2 id="the-problem">The problem</h2>
<p>I only give ssh accounts to a few of my close friends. I expect them to use
the internet only for browsing and not for downloading heavy stuff. But sometimes
they would download heavy stuff anyway, and that would drastically affect my internet
experience. I don’t want to cut everyone’s ssh access, but it would be great
if I could somehow know which user is eating up my bandwidth, and then warn them or
deny their ssh access.</p>
<h2 id="solution">Solution</h2>
<p>I searched for an existing tool that would meet my
requirements. I found a few interesting tools like
<a href="http://www.ex-parrot.com/pdw/iftop/">iftop</a>, but the problem with most of them
was that they don’t map users to sessions; they could only show me the traffic
per IP address. Then I came across this <a href="https://newspaint.wordpress.com/2011/08/02/ssh-traffic-accounting-on-linux/">blog
post</a>
which makes use of iptables to log the traffic. Again, without
tweaking or writing scripts, iptables cannot map users
to their respective traffic; the script in the above-mentioned blog post
does the necessary tweaking and maps users to their corresponding IP
addresses. That’s a great way to identify who is eating up your
bandwidth, but it is also very time-consuming, as too much
manual introspection is involved: one has to log in to the server
every time and run <code class="language-plaintext highlighter-rouge">iptables</code> to see the total traffic used by any
user. Also, I wanted this information to be available to all my ssh users, so
that they know how much bandwidth they might unknowingly be consuming.</p>
<p>To solve this, the author of the above-mentioned post also provides another
script that takes the data from iptables every minute and every hour and feeds
it into a database.</p>
<p>For better analysis of the data, I ended up using
<a href="http://highcharts.com/">HighCharts</a>, a wonderful JavaScript library for
creating charts and maps. It helped me turn that data into
graphs which I can now also show to my ssh users.
<a href="https://gist.github.com/pranavk/de5013d779431dbc0058">Here</a>
is the little hack I ended up writing to meet my requirements. It queries the database and uses
the information to draw nice charts of each ssh user’s internet usage.
I can now also make these charts public to my users.</p>
<p>To automatically deny ssh access, I also wrote a few scripts that
check the last hour’s traffic usage in the database and add an entry under
<code class="language-plaintext highlighter-rouge">DenyUsers</code> in <code class="language-plaintext highlighter-rouge">/etc/ssh/sshd_config</code> to deny the user’s ssh access,
then re-enable access after a specified number of hours.</p>
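A minimal sketch of that deny step, assuming a helper that returns a user’s last-hour usage from the database (stubbed out here with fixed numbers); it operates on a temporary copy of sshd_config for safety, and the 500 MB limit and usernames are made up:

```shell
# Work on a temporary copy so we don't touch the real sshd_config.
SSHD_CONFIG=$(mktemp)
printf 'PermitRootLogin no\n' > "$SSHD_CONFIG"

LIMIT=$((500 * 1024 * 1024))  # assumed cap: 500 MB in the last hour

hourly_usage_bytes() {
    # Stub: in the real script this queries the database fed by iptables.
    case $1 in
        alice) echo 600000000 ;;  # over the limit
        *)     echo 1000000   ;;  # under the limit
    esac
}

deny_user() {
    # Append to DenyUsers unless the user is already denied.
    grep -q "^DenyUsers.*\b$1\b" "$SSHD_CONFIG" || echo "DenyUsers $1" >> "$SSHD_CONFIG"
}

for user in alice bob; do
    [ "$(hourly_usage_bytes "$user")" -gt "$LIMIT" ] && deny_user "$user"
done

grep '^DenyUsers' "$SSHD_CONFIG"
```

In the real scripts the stub is replaced by a database query, sshd is reloaded so the change takes effect, and a companion job removes the entry again after the configured number of hours.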
<p>Overall, an interesting few hours spent researching and hacking, with the last few minutes spent writing up this
blog post.</p>
Goodbye proxychains !2015-02-18T00:00:00+00:00http://pranavk.me/linux/goodbye-proxychains-
<p>I have a weird, roundabout way of using the Internet. I have a server on
the Internet/Cloud that I am almost always logged in to. I use this server to
create a SOCKS proxy locally and then use the Internet via this SOCKS proxy.</p>
<p>I would then use proxychains, proxychains-ng, or tsocks with each program I
wanted to use the Internet with. It is a bit of a messy setup, but somehow I made
everything work using this combination of tools. These tools resolve
the program’s DNS requests on the remote side, because my local DNS server
doesn’t resolve DNS requests for the Internet.</p>
<p>Recently I found a wonderful tool, <a href="http://github.com/apenwarr/sshuttle">sshuttle</a>. It dynamically
rewrites your iptables rules to give your computer Internet access if you have a
setup like mine, i.e. you have a server on the Internet and you want to use
the Internet via this server on your local computer. You then set your
computer to use <code class="language-plaintext highlighter-rouge">no proxy</code>, and it works: because sshuttle has changed your iptables rules,
all your packets and DNS requests are routed through the server, and your
applications don’t even know what’s going on in the background. They just
think you are in a <code class="language-plaintext highlighter-rouge">no proxy</code> environment. This has really made my life much
simpler and removed my hard dependency on proxychains/tsocks.</p>
<h2 id="installing-and-usage">Installing and Usage</h2>
<p>I found sshuttle in my distribution’s repositories (Fedora), so a simple yum
install finished the job for me. If you use another distribution,
it is most probably in your repositories too, but even if it is not, you
can go to the <a href="http://github.com/apenwarr/sshuttle">sshuttle</a> page on GitHub and download
it from there.</p>
<p>Then a simple command like the following should do the trick:</p>
<p><code class="language-plaintext highlighter-rouge">sshuttle --dns -vvr username@host 0/0</code></p>
<p>If you are curious about these flags, you can find much more detailed
information about them and many others on the sshuttle website.</p>
Transforming Control flow to Data flow2015-02-13T00:00:00+00:00http://pranavk.me/architecture/transforming-control-flow-to-data-flow
<p>This is in continuation to my earlier introductory post on <a href="http://pranavk.github.io/architecture/spatial-computing">Spatial
Computing</a> project I have been
working on.</p>
<p>I spent the last few days transforming the traditional control flow of
programs into a data flow graph with producer-consumer relationships between
the instructions. As mentioned in my previous post, when this data flow is executed
on the data flow architecture that we have been trying to build, for ASICs and
later for general-purpose computers, it will exploit the highest level of
parallelism available, thanks to the producer-consumer relationship between the
instructions.</p>
<p>There are some programming constructs which are hard to transform from
sequential to data flow. Loops, pointers, and pointers to functions are some of those
that need extensive care. This post is more about the results of my
project. I will show the transformation for a single loop, for the sake
of simplicity:</p>
<p><b>Sample Code</b></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>int sum_single_loop (int a) {
  int sum = 0;
  for (int i = 0; i < a; i++)
    sum = sum + i;
  return sum;
}
</code></pre></div></div>
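For reference, the loop above computes 0 + 1 + ... + (a - 1); here is a quick shell mirror of the same control flow, for a = 5:

```shell
# Same loop as the C sample: sum = 0 + 1 + ... + (a - 1)
a=5
sum=0
i=0
while [ "$i" -lt "$a" ]; do
    sum=$((sum + i))
    i=$((i + 1))
done
echo "$sum"   # prints 10
```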
<p>I am making use of the LLVM IR format: I convert code in any imperative
language that has an LLVM front-end available into LLVM IR. This
lets me deal with a lot of languages. We have <code class="language-plaintext highlighter-rouge">clang</code>, which can emit LLVM IR for
any given C code, so I use clang to emit the IR and then run the LLVM passes I
have written to transform the control flow graph into a data flow graph. The
passes use the CFG (Control Flow Graph) as a reference; they take the necessary
information from the CFG and create a DFG out of it while keeping the
correctness of the program intact. After all, correctness is the one thing we
cannot sacrifice for performance.</p>
<p>Below are the corresponding CFG (generated using clang and LLVM’s opt) and,
beneath it, the transformed DFG (produced by my LLVM passes) for the above code.</p>
<p><img src="/images/cfgSingle.png" /></p>
<p>The above graph consists of 5 basic blocks, with their labels at the top.</p>
<p>If you are familiar with assembly-level programming, you should be able to
understand all the LLVM instructions except the PHI instructions. A PHI
instruction takes a set of pairs (two in this case) and assigns the first
element of one of these pairs to the variable on its left side. Since there are
two such values in this case, it assigns whichever value it receives first. The
second element of each pair names the basic block this first element will
come from. The constant <code class="language-plaintext highlighter-rouge">0</code> in the PHI instruction is there to trigger the
program from outside.</p>
<p><img src="/images/dfgSingle.png" /></p>
<p>Here is the DFG transformed from the CFG above. You can see that I have used <code class="language-plaintext highlighter-rouge">Steer</code>
nodes to eliminate the branch instructions. A steer node, as the name suggests,
sends the value coming in at its top out to its left if the select pin (coming in on its
right) is true, and to its right if it is false. In other words, it works as a
demultiplexer. You can also see that some of the steer nodes have only one
output value at the bottom left and no value at the bottom right. This means that
the value on the right is not needed and hence is sunk.</p>
<p>Also, note that the <code class="language-plaintext highlighter-rouge">su.0</code> and <code class="language-plaintext highlighter-rouge">i.0</code> instructions in the above DFG still act as triggers when
executing this data flow graph, due to the PHI instructions explained above.</p>
<p>The LLVM passes I have written are in a very dirty state as of now. I still have
to deal with Load/Store instructions so that I can handle pointers in
programming languages, since pointers and the memory operations that use them are
fundamental if I am to cover any programming language fully. I also need to
add the mechanism of waves, which we have theoretically worked out, to keep the data
flow cycles in sync with each other and maintain program correctness.</p>
Libreoffice headless comes to the rescue2015-02-03T00:00:00+00:00http://pranavk.me/linux/libreoffice-headless-comes-to-the-rescue
<p>During exam time, I often have to read a lot of .ppt files provided by our
instructors. That’s what we are supposed to study so that the syllabus can be
revised quickly. I don’t use Windows at all and hence have to use LibreOffice to
open these presentation files, but LibreOffice is quite slow for me
compared to Microsoft PowerPoint. I don’t want to install Windows just
for this purpose. However, I observed that PDFs open
comparatively faster than these .ppt files, and converting each .ppt file to its PDF
equivalent by hand is a cumbersome task.</p>
<p>Last week, I wanted to convert a bulk of .ppt files to their PDF equivalents, and
this is where LibreOffice headless came to my rescue. It helped me do a mass
conversion with the following command:</p>
<p><code class="language-plaintext highlighter-rouge">libreoffice --headless --convert-to pdf *.ppt</code></p>
<p>This converts all the .ppt files in the current directory to their PDF
equivalents.</p>
<p>If you don’t know what headless means: it means that you don’t want the whole
GUI to wake up and do the work for you, which increases performance
several-fold, as in this case. We just want the conversion feature of
LibreOffice. You can invoke other LO features too: pass the help flag to see
which functions LO supports and try calling them headless.</p>
<p>Let me know in the comments below. Happy hacking!</p>
Enabling user homedirs on Apache2015-02-03T00:00:00+00:00http://pranavk.me/linux/enabling-user-homedirs-on-apache
<p>In this post I will talk about enabling home directories for user accounts
on the server. I will be talking specifically about CentOS, since that is the OS
running on one of the servers I manage.</p>
<p>As usual, we have to tweak the SELinux variables a little, because if your
SELinux is on, it will prevent Apache from accessing user home directories. But
first of all you need to enable home directories in the Apache configuration
itself. Depending on your OS, the Apache configuration file name may differ. On
CentOS, it is:</p>
<p><code class="language-plaintext highlighter-rouge">/etc/httpd/conf/httpd.conf</code></p>
<p>You need to look for the <code class="language-plaintext highlighter-rouge">mod_userdir.c</code> IfModule block, comment out the line
that says <code class="language-plaintext highlighter-rouge">UserDir disabled</code>, and make sure the line <code class="language-plaintext highlighter-rouge">UserDir public_html</code> is
uncommented. You can change public_html to any directory name you like,
but then make sure your users have a directory with that same name in their home
directories.</p>
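After the edit, the relevant block in httpd.conf should look roughly like this (surrounding directives vary between Apache versions, so treat this as a sketch):

```
<IfModule mod_userdir.c>
    # UserDir disabled
    UserDir public_html
</IfModule>
```

With this in place, requests to /~username/ map to /home/username/public_html.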
<p>You also need to restart the httpd server after this:</p>
<p><code class="language-plaintext highlighter-rouge">sudo service httpd restart</code></p>
<p>After this you need an SELinux rule: you need to allow userdir in
SELinux.</p>
<p><code class="language-plaintext highlighter-rouge">sudo setsebool -P httpd_enable_homedirs true</code></p>
<p>And there you go: users on the server should be able to access their
public_html directory via their username prefixed with a tilde character.</p>
<h2 id="enabling-directory-listing">Enabling directory listing</h2>
<p>To enable directory listing, you need to open your
<code class="language-plaintext highlighter-rouge">/etc/httpd/conf/httpd.conf</code> file again and uncomment the Directory block for
<code class="language-plaintext highlighter-rouge">/home/*/public_html</code>. You can find this block right below the changes you made
above when you enabled user home directories.</p>
<p>Restart the server and you should now be able to have directory listing enabled.</p>
<p>If you are not able to access something, it is probably because of
permissions, so double-check them before asking for help.</p>
<p>Oh my ! This sysadmin stuff is addictive. ;)</p>
Setting up a mail server the easy way2015-02-02T00:00:00+00:00http://pranavk.me/linux/setting-up-a-mail-server-the-easy-way
<p>For the past few years, I have been managing the whole server at
<a href="http://glug.nith.ac.in">GLUG-NITH</a>. I installed various services on that machine,
since it was the first server I got my hands on. I tried a whole lot of stuff,
until one day the SAN storage crashed and we had to start from scratch again. We
also had a mail server, and it was very urgent to bring it back up, since leaving the mail
server in that inconsistent state would mean a lot of bounced mail.</p>
<p>Last time we had a full-fledged mail server consisting of postfix and
dovecot. I used roundcubemail as the frontend, as it had a nice UI. But there was a
major security concern with this: a sysadmin with root access can always read
the emails of the members who are given email addresses associated with the
server. This time, I thought of taking care of this major security concern by
scrapping the whole idea of storing mail on the server. Instead, we just use
postfix to receive mail and forward it to the alias that the user
provides. E.g., if the alias for abc@glug.nith.ac.in is set to jkl@gmail.com,
then as soon as someone sends an email to abc, it
is forwarded to jkl@gmail.com when it arrives on the server.</p>
<p>This also means that we don’t have to install dovecot this time, so one
less piece of software to configure while still providing email functionality to the
users. Postfix uses the virtual file in /etc/postfix/ to set aliases for
users. If you want to create someone’s account on the server, you just need to
set the alias in this file and then run:</p>
<p><code class="language-plaintext highlighter-rouge">sudo postmap /etc/postfix/virtual</code></p>
<p>This creates a new binary file, virtual.db, which is used as a lookup
table whenever someone sends an email to the server, to check whether the recipient has
an account on it.</p>
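For illustration, the virtual file is just a plain-text lookup table with one mapping per line; the first entry matches the example above, the second is hypothetical:

```
# /etc/postfix/virtual
abc@glug.nith.ac.in    jkl@gmail.com
xyz@glug.nith.ac.in    someone@example.com
```

Remember to run postmap again after every edit so that virtual.db stays in sync with the text file.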
<p>So far, it’s going well, but I think it probably needs more configuration and some
measures to prevent it from being detected as spam by the major mail service providers.</p>
My research project on Spatial Computing2015-01-07T00:00:00+00:00http://pranavk.me/architecture/spatial-computing
<p>For the past few months I have been working on a research project in one of the core areas of computer science,
computer architecture, specifically trying to increase the efficiency of existing
processors several-fold. This should be possible because the new architecture is a dataflow-based
architecture, unlike the control flow architecture that we have been using
since von Neumann invented it.</p>
<p>This new architecture can theoretically exploit the maximum possible parallelism,
which makes it super efficient. Here
we are trying to design the hardware and create a compiler for programs written in imperative languages,
so that they can be executed on this new dataflow-based hardware. The <a href="http://llvm.org">LLVM</a> compiler
infrastructure suits this very well: it has front-ends for various
programming languages and converts their source code
to a common LLVM IR (Intermediate Representation). This saves a lot
of time, since we do not have to deal with so many programming languages and
can work on a single language (LLVM IR) that indirectly supports all the other
languages.</p>
<p>We have taken the concept of waves from the well-known <a href="http://wavescalar.cs.washington.edu">WaveScalar
architecture</a>. Since LLVM is a compiler
infrastructure, it lets you write a pass that takes LLVM IR as input and performs
operations on it. Implementing the concept of waves from the WaveScalar
architecture in such an LLVM pass helped us annotate waves in the control flow
graph of any procedure written in an imperative language. With the help of these
waves and the control flow graphs, we generate the data flow graph.
This is the main thing that breaks the sequentiality embedded in
control flow programs: it puts in place a producer-consumer relationship
between the various instructions, and that’s all. As soon as any instruction produces
something, all its consumer instructions consume it. We have got
rid of the sequentiality; there is no more waiting, and hence it is faster.</p>
<p>Now, why do we need waves here? Conversion from control flow to data flow
may look easy and feasible, but in reality it breaks a lot of things. Loops and
branches are some of the programming constructs that would break this conversion
and make the data flow equivalent of the program produce wrong results. Waves are
independent units in which each instruction can execute at most once; they
have a single entry point, and all the instructions in a wave are partially
ordered. This ensures that the correctness of the code is not affected in
our endeavour to increase performance. After all, a program that doesn’t
give correct results is of no use.</p>
<p>Once we get a data flow graph by using the information from the control flow
graph of a program, we will write a SystemC implementation to simulate it,
so that we can check its feasibility as real hardware. Writing a compiler to
generate a data flow binary to be run on the hardware will be the next step.</p>
<p>I am currently starting out with very basic programming constructs such as
loops, and not yet handling pointers, runtime bindings, etc. Pointers are one of the fundamental
things that we will have to take care of anyway, because any major program
makes use of them. <a href="http://github.com/pranavk/spatial-computing">Here</a> are the very
basic LLVM passes I have written.</p>
<p>More coming as soon as I get some major work done on this. :)</p>
Create a secure GPG keypair with subkeys2014-08-24T00:00:00+00:00http://pranavk.me/cryptography/create-a-secure-gpg-keypair
<p>Public key cryptography has always fascinated me. I created my keypair long
ago. It is not used much, because not many people around me use
public key cryptography: very few of them have generated a keypair, and some of
those who have do not even remember it. Still, this week I
made my keypair immune to key theft. If your
private key is stolen, everything is over, so better not to store the main private
key on your laptop or phone, but rather in a private safe, or on a pendrive
that is then kept in a private safe.</p>
<p>Whether you already have a GPG key or not, using separate subkeys to sign and
encrypt messages is always a good choice. The steps I describe
below will help you detach your main private key from the secret keyring on
your laptop and store it in a safe place. You can buy a separate
pendrive that you don’t use for daily purposes to store your private key.</p>
<p>I am assuming you already have your GPG keypair; if you don’t,
you can use</p>
<p><code class="language-plaintext highlighter-rouge">gpg --gen-key</code></p>
<p>to generate your keypair. I am also assuming that you have created two subkeys
here, one for encryption and one for signing. I won’t go into techincal details here since this post
is not about creating GPG keypair from scratch. You can easily google about how
to create your GPG keypair and how to add subkeys to it. Most of this part is
interactive, so some of you don’t even need to google but can simply understand
what’s written on the console.</p>
<p>So, you now have two subkeys linked to your main keypair, one for encryption and
one for signing. You can verify this by first running:</p>
<p><code class="language-plaintext highlighter-rouge">gpg --edit-key <your_key></code></p>
<p>and then:</p>
<p><code class="language-plaintext highlighter-rouge">list</code></p>
<p>You can see the usage tag in front of each subkey: ‘E’ stands for encryption and
‘S’ stands for signing. Note that signing here means signing documents, not
keys. Signing keys, revoking keys, and adding new subkeys are some of the operations
that require the presence of your main private key. Since these
operations are not done on a daily basis, we can remove the main private key from
our laptop. But first you should back up all of your private and public keys.</p>
<p>To back up your public key:</p>
<p><code class="language-plaintext highlighter-rouge">gpg --export --armor <your_id> > key.pub</code></p>
<p>To back up your private key:</p>
<p><code class="language-plaintext highlighter-rouge">gpg --export-secret-keys --armor <your_id> > key.priv</code></p>
<p>To back up your secret subkeys:</p>
<p><code class="language-plaintext highlighter-rouge">gpg --export-secret-subkeys <your_id> > subkeys</code></p>
<p>Now you need to delete all the secret keys from your keyring like this:</p>
<p><code class="language-plaintext highlighter-rouge">gpg --delete-secret-keys <your_id></code></p>
<p>Note that this also deletes the subkeys from your keyring, but you can
import them again with:</p>
<p><code class="language-plaintext highlighter-rouge">gpg --import subkeys</code></p>
<p>This imports the subkeys, but the main private key is still not
imported. That means you can encrypt and sign documents without any problem, but
you need to import your main private key whenever you need to do bigger tasks
like signing other people’s keys, adding UIDs, etc.</p>
<p>Keep your private key, i.e. the <code class="language-plaintext highlighter-rouge">key.priv</code> file created in the steps above, safe and use it only
when required.</p>
<p>At this point, you can run</p>
<p><code class="language-plaintext highlighter-rouge">gpg -K</code></p>
<p>and you will see a <code class="language-plaintext highlighter-rouge">#</code> sign in front of your main key. This means that the
secret key is not present in your keyring.</p>
<p>When you need your main private key again, first delete the secret keys from your keyring:</p>
<p><code class="language-plaintext highlighter-rouge">gpg --delete-secret-keys <your_id></code></p>
<p>and then import your main private key:</p>
<p><code class="language-plaintext highlighter-rouge">gpg --import <your_public_key> <your_private_key></code></p>
<p>Now, after running:</p>
<p><code class="language-plaintext highlighter-rouge">gpg -K</code></p>
<p>you can see that there is no <code class="language-plaintext highlighter-rouge">#</code> sign, which means private key is available in
your secret keyring. You can follow the above mentioned steps again to delete
the main secret key from your keyring after you have finished performing tasks
that require the use of main private key.</p>
<p>You can also use these subkeys on other devices, such as your smartphone, to
encrypt and sign documents, or create another subkey for your Android
phone for such tasks. (Note that creating a new subkey requires you to
import the main private key first.)</p>
Post GUADEC2014-08-12T00:00:00+00:00http://pranavk.me/post-guadec
<p>It is a little late to write about GUADEC, but I got really sick after I reached
home from Strasbourg, so I have been taking time to recover and have now come up
with this slightly late blog post about GUADEC.</p>
<p>This GUADEC was my first international conference, and attending it was a great
experience. It was awesome meeting experts in their fields and actually talking to
them in person rather than discussing with them on IRC or over
email. Sitting together with so many new faces, making new friends, having long discussions
about technical topics, hacking on stuff together, discussing upcoming
features in GNOME, having fun, and exploring the beautiful Alsace region
together was truly a wonderful experience.</p>
<p>The talks and keynotes were excellent. It happened to me multiple times that I
wanted to attend talks happening in both rooms at the same time. I
felt like multiplying myself so that I could attend both, but alas,
nobody can do that. I am eagerly waiting for the GUADEC videos so that I can
watch the ones I really wanted to attend but couldn’t.</p>
<p>I also attended some of the BoFs at the end of the conference. I really liked
the idea of people sharing an interest in a common topic sitting together, discussing
ideas and hacking together. I felt that productivity can increase
several-fold when you are hacking together: you can simply ask other
hackers when you are stuck and you are through, which saves you a considerable
amount of time compared with exploring the dark corners of the internet to
find a solution, which may take minutes or sometimes hours alone. There
is also a huge amount of learning involved in such BoFs; you can learn more within a
few hours than you might otherwise learn in maybe a week.</p>
<p>Besides all this, the conference was a motivating factor for me to
contribute more to GNOME and FOSS, to be an integral part of such communities,
and to keep attending such conferences in the future for high-level exposure to
technical stuff.</p>
<p>I am very thankful to the GNOME Foundation for sponsoring my travel and accommodation
and providing me with the opportunity to get exposure at such a level. I would also
like to thank the local organizing team, especially Alexandre and Nathalie, for
making this event successful. I arrived a day before GUADEC and saw them
working crazily to make it a success. Hats off to the whole local organizing
team for making GUADEC 2014 a success.</p>
<p>
<img src="/images/gnome_sponsored.png" />
</p>
Photos: Browsing DLNA Servers - One step closer2014-07-07T00:00:00+00:00http://pranavk.me/open-source/photos-browsing-dlna-servers---one-step-closer
<p>This post is in <a href="http://pranavk.github.io/gsoc.xml">series</a> about my <a href="https://wiki.gnome.org/ThreePointThirteen/Features/BrowseDMSPhotos">GSoC project:
Browsing DLNA Media Servers in Photos</a>.</p>
<p>If you checked out the latest GNOME 3.13.3 release, you might have noticed that
gnome-online-accounts has learned to set up access to the media servers on
your local network, as mentioned in this <a href="http://blogs.gnome.org/mclasen/2014/06/26/gnome-3-13-3/">blog post by
mclasen</a>. It is good to see
one part of the project committed in this release.</p>
<p>For the past few weeks I have been working on making this whole setup
work, and I have finally been able to make all of it work. I have a working <a href="https://bugzilla.gnome.org/728912">media server miner</a>
that mines the content from the media server accounts added in GOA. I have even
taken care of albums in this regard: photos from your media servers are not
thrown into the tracker randomly; rather, their parent directory information
comes along with the photos. This makes it easy to view photos in the application in a
more organized way, provided you have organized your
photos well on your media server.</p>
<p>Here’s a screenshot showing photos in
their albums according to the directory structure of the media server.</p>
<p>
<img src="/images/photos_collection.png" />
</p>
<p>The albums you see in the above screenshot are taken directly from the media
server’s directory information.</p>
<p>I have written a
<a href="https://bugzilla.gnome.org/728913">patch</a> that adds a media-server extension to
gnome-photos. Applying this patch lets you see media-server content in your gnome-photos application.</p>
<p>With this, I have been able to connect all the dots, but there are still many issues that need to be considered. Further, the setup
requires testing to find bugs that might have crept in during the coding phase.</p>
<h3 id="performance-issues">Performance issues</h3>
<p>Performance in mining the media server content is one of the main
issues right now. If you followed my
<a href="http://pranavk.github.io/open-source/mediaserver-miner-for-gnome-online-miners/">previous
post</a>
about the media server miner, I mentioned searching for content on non-searchable
devices. I had to adopt a recursive, directory-by-directory approach
to mine content from non-searchable media servers. It currently takes around 6 to
7 seconds to mine photos from my Android device (non-searchable), which holds fewer
than 100 photos, and about 1.5 seconds on searchable devices (Rygel
serving around 150 photos). I am currently working on approaches that
might boost the performance of mining content, especially for non-searchable
devices.</p>
<h3 id="design-issues">Design issues</h3>
<p>A few things still need to be decided. Do we want to control our media server directly from the application, for
instance to delete photos from and upload photos to the media server? Or do we just want to
make media servers read-only for content applications?</p>
<p>If you have any suggestion or query, you are most welcome to leave a comment.</p>
<h3 id="attending-guadec---2014">Attending GUADEC - 2014</h3>
<p>Last but not least, I would like to thank the GNOME Foundation and the travel
committee for sponsoring me to attend GUADEC 2014. I am excited to attend my first
conference and to meet you all there.</p>
<p>
<img src="/images/gnome_sponsored.png" />
</p>
Disable your NVIDIA card in Linux2014-06-29T00:00:00+00:00http://pranavk.me/linux/disable-your-nvidia-card-in-linux
<p>My NVIDIA graphics card on my Linux box has given me a lot of pain. I
couldn’t stand seeing my laptop fan going wild and my laptop getting excessively
hot even during normal operation. I knew that all this was due to the NVIDIA
graphics card I have. Since I do not use the graphics card much, I decided to turn
it off permanently on my Fedora 20 box.</p>
<p>The <a href="https://github.com/Bumblebee-Project/bbswitch">bbswitch</a> module helped me a lot
in accomplishing this. But before using this module, I had to disable nouveau,
the open source driver for NVIDIA cards that already ships with the Linux
kernel. Disabling it means I needed to blacklist it. On my Fedora 20 box, I created the
following file:</p>
<p><code class="language-plaintext highlighter-rouge">/etc/modprobe.d/blacklist-nouveau.conf</code></p>
<p>Just add the following line to the file mentioned above.</p>
<p><code class="language-plaintext highlighter-rouge">blacklist nouveau</code></p>
<p>Remember to regenerate the initramfs image after doing this, so that your kernel knows
about the change on the next reboot. You can generate the new initramfs
image with:</p>
<p><code class="language-plaintext highlighter-rouge">dracut -f</code></p>
<p>Reboot your box and the nouveau driver will not be loaded this time. You can double-check
that with the following command:</p>
<p><code class="language-plaintext highlighter-rouge">lsmod | grep nouveau</code></p>
<p>If you still see nouveau in the output, something went wrong and the driver was
not disabled successfully. Re-check the instructions above; there shouldn’t be
any problem if you followed them accurately.</p>
<p>The next step is to install the kernel-devel package for your current
kernel. You can do that as:</p>
<p><code class="language-plaintext highlighter-rouge">sudo yum install kernel-devel-$(uname -r)</code></p>
<p>This installs the kernel files necessary to build modules for this kernel. You
can now download the <a href="https://github.com/Bumblebee-Project/bbswitch">bbswitch
module</a> from GitHub. Download the
zip file, extract it, and cd into the extracted directory. Hit</p>
<p><code class="language-plaintext highlighter-rouge">make</code></p>
<p>It will build the bbswitch kernel module, provided the kernel-devel package
is installed for your kernel version.</p>
<p>After this point, you can load the kernel module explicitly using</p>
<p><code class="language-plaintext highlighter-rouge">sudo make load</code></p>
<p>It will load the bbswitch module if nouveau is disabled. You can see the result
with:</p>
<p><code class="language-plaintext highlighter-rouge">dmesg</code></p>
<p>Loading the module doesn’t mean your NVIDIA card is off yet. To turn it off,
enter the following:</p>
<p><code class="language-plaintext highlighter-rouge">sudo tee /proc/acpi/bbswitch <<< OFF</code></p>
<p>It will turn off your card. Again, to see the result run <code class="language-plaintext highlighter-rouge">dmesg</code> and
check its latest output. If there was some problem disabling the
card, the output of <code class="language-plaintext highlighter-rouge">dmesg</code> will tell you.</p>
<h2 id="disabling-card-on-boot">Disabling card on boot</h2>
<p>If you don’t need your NVIDIA card at all, you can turn it off at every
boot. For that, the bbswitch module must be loaded on every boot. To
load it automatically, create a file:</p>
<p><code class="language-plaintext highlighter-rouge">/etc/modules-load.d/bbswitch.conf</code></p>
<p>with the following content</p>
<p><code class="language-plaintext highlighter-rouge">bbswitch</code></p>
<p>Also create another file:</p>
<p><code class="language-plaintext highlighter-rouge">/etc/modprobe.d/bbswitch.conf</code></p>
<p>with the content:</p>
<p><code class="language-plaintext highlighter-rouge">options bbswitch load_state=0</code></p>
<p>This makes sure that whenever the bbswitch module is loaded, the card is turned
off automatically, so you won’t have to write to <code class="language-plaintext highlighter-rouge">/proc/acpi/bbswitch</code> manually.</p>
<p>For the above to work, there is one more thing you need to do: modprobe must
be able to find the bbswitch module you just built. By default modprobe looks
for modules in:</p>
<p><code class="language-plaintext highlighter-rouge">/lib/modules/<kernel version>/kernel/</code></p>
<p>So copy your <code class="language-plaintext highlighter-rouge">bbswitch.ko</code> file to the directory mentioned above. Then refresh
the module database with:</p>
<p><code class="language-plaintext highlighter-rouge">sudo depmod -ae</code></p>
<p>Now check that <code class="language-plaintext highlighter-rouge">modprobe</code> finds bbswitch:</p>
<p><code class="language-plaintext highlighter-rouge">modprobe bbswitch</code></p>
<p>If it finds the bbswitch module, it outputs nothing; otherwise it shows
an error saying it couldn’t find the module.</p>
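<p>The boot-persistence steps above can be collected into one small script. To keep it safe to experiment with, the sketch below stages everything under a configurable root (it would be <code class="language-plaintext highlighter-rouge">/</code> on a real system); the function name and the staged layout are my own illustration, not part of bbswitch:</p>

```shell
#!/bin/sh
# Stage the bbswitch persistence files under $1 so the layout can be
# previewed before touching the real system (use / on a live box, then
# finish with: depmod -ae).
stage_bbswitch() {
    ROOT=$1 KVER=$2 MODULE=$3
    mkdir -p "$ROOT/etc/modules-load.d" "$ROOT/etc/modprobe.d" \
             "$ROOT/lib/modules/$KVER/kernel"
    echo bbswitch > "$ROOT/etc/modules-load.d/bbswitch.conf"
    echo 'options bbswitch load_state=0' > "$ROOT/etc/modprobe.d/bbswitch.conf"
    cp "$MODULE" "$ROOT/lib/modules/$KVER/kernel/"
}

# Dry run against a scratch directory with a placeholder module.
scratch=$(mktemp -d)
touch "$scratch/bbswitch.ko"
stage_bbswitch "$scratch/root" "$(uname -r)" "$scratch/bbswitch.ko"
find "$scratch/root" -type f
```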
<p>After following all the above instructions, your NVIDIA card will automatically
be disabled on every reboot. Keep in mind that you have to repeat these steps
whenever you upgrade your kernel. If you want the setup to persist across kernel
upgrades, you can use DKMS; to learn more, refer to the
<a href="https://github.com/Bumblebee-Project/bbswitch/">bbswitch</a> README.</p>
<p>Happy hacking!</p>
MediaServer miner for gnome online miners2014-06-09T00:00:00+00:00http://pranavk.me/open-source/mediaserver-miner-for-gnome-online-miners
<p>With the GOA <a href="http://bugzilla.gnome.org/730890">media server provider</a> I had been working on in place, it’s time
to start mining data from the added GOA account. Though a media server provides a
variety of media types, I am currently only considering browsing photos from the
media server.</p>
<p>The object exported on D-Bus by the GOA daemon only provides the
UDN (Unique Device Name) and the
DlnaSupported property of the media server. The DlnaSupported property tells
whether the media server is DLNA certified; this can be useful if in future we
plan to integrate other types of media servers into GNOME. These two properties are just enough for the
miner to start working. The miner fetches the UDN of the added accounts and checks
whether the DMS with that UDN is currently online. Depending on the
DlnaSupported property, it instantiates the server manager accordingly
(e.g. a DLNA server manager for DLNA devices, some XXX manager for some XXX type of
media server, etc.). All online media servers are
then probed for photos.</p>
<p>The mining task would have been much easier had
all media servers provided the searchable property. Unfortunately,
not all media servers are searchable: you can’t query them
by MIME type (e.g. “give me all photos”). Of the DMSes I am currently
playing with, <a href="https://wiki.gnome.org/Rygel">Rygel</a> supports searching,
but quite a few, like my Android device, prohibit searching altogether. For
DMSes that do not allow searching for content, the miner needs to
recursively check each container (directory) for photos.</p>
<p>At the time of writing this post, I have implemented basic mining
of photos from devices that are searchable. You can have a look at the work
attached to <a href="http://bugzilla.gnome.org/728912">Bug 728912</a>. I am still finding
a way to efficiently search for photos on DMSes that do not provide the
searchable property at all or that prohibit searching by MIME type.</p>
MediaServer Provider in gnome online accounts2014-06-01T00:00:00+00:00http://pranavk.me/open-source/mediaserver-provider-in-gnome-online-accounts
<p>After a discussion with the gnome-design team about my GSoC <a href="https://wiki.gnome.org/action/edit/ThreePointThirteen/Features/BrowseDMSPhotos">project</a>, viz. browsing DMSes
from Photos, the schedule of the project changed slightly. The discussion
concluded that new media servers would be added in
gnome-online-accounts itself, which means I need to write a new provider for
media servers in gnome-online-accounts in addition to my earlier proposed
architecture, which only included writing a miner for browsing DMSes and
then using that miner from the gnome-photos application.</p>
<p>So I have been working on a new MediaServer provider in
gnome-online-accounts. I am using GtkListBox to show the media server
devices currently available on the network. There are plans to also support
AirPlay in the future, which would require extending this media server
provider. For now I am only adding support for DLNA media servers.</p>
<p>The media server provider in g-o-a uses the dleyna-server D-Bus API in the backend to
accomplish the major part of its task. It calls dleyna-server methods to get all
the servers around and then probes each of them for its properties.</p>
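<p>As a quick command-line illustration of the same backend call, you can poke the service with <code class="language-plaintext highlighter-rouge">gdbus</code>. The bus and interface names below are taken from dLeyna’s documentation as I remember them, so verify them with d-feet; the snippet skips itself when gdbus or the service is unavailable:</p>

```shell
#!/bin/sh
# Illustrative: list the media servers dleyna-server knows about.
# Bus/object/interface names should be double-checked with d-feet.
if command -v gdbus >/dev/null 2>&1; then
    servers=$(gdbus call --session \
        --dest com.intel.dleyna-server \
        --object-path /com/intel/dLeynaServer \
        --method com.intel.dLeynaServer.Manager.GetServers 2>&1) \
        || servers='skipped: dleyna-server not reachable'
else
    servers='skipped: gdbus not installed'
fi
echo "$servers"
```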
<p>You can have a look at my code in a separate WIP branch I am maintaining on
GitHub
<a href="https://github.com/pranavk/gnome-online-accounts/tree/mediaprovider">here</a>. The
branch is quite buggy at the time of writing; it just makes things work for
now, and I am scrubbing and improving the code gradually.</p>
DLNA Media Server support in gnome photos2014-04-25T00:00:00+00:00http://pranavk.me/open-source/dlna-media-server-support-in-gnome-photos
<p><a href="https://wiki.gnome.org/Design/Apps/Photos">Photos</a> is a wonderful application available in GNOME, written in C. It
helps you browse all the photos in your standard directories (Home,
Pictures, Documents, etc.). You can create new collections of photos and mark your
good ones as favorites.</p>
<p>As part of my GSoC 2014 <a href="https://wiki.gnome.org/action/edit/ThreePointThirteen/Features/BrowseDMSPhotos">project</a> under GNOME, I will be extending the code of
this application to add a new feature to GNOME: support for DLNA
Media Servers.</p>
<h2 id="what-is-dlna-media-server-">What is DLNA Media Server ?</h2>
<p>So, let’s talk about what a DLNA Media Server is. There are a lot of devices around
us these days that are DLNA Media Servers (DMSes): cameras,
smartphones, laptops, and so on. These servers serve content to Digital Media
Renderers (DMRs). The job of a DMR is to get content from DMSes and render
it: if the media is music, the DMR plays it; if it’s a photo, it
shows it. A DMR can be your TV, your smartphone, etc.</p>
<p>My GSoC project this summer will enable gnome-photos to browse the content of
all the DMSes available around it and show that content in the
application. Additionally, it will be able to push content served by DMSes to DMRs
on the local network, so it will also act as a Digital
Media Controller (DMC). The application already had a few DMC capabilities;
this project will extend them once support for DMSes is written.</p>
<h2 id="available-approaches">Available approaches</h2>
<p>There are two approaches I can follow to make the gnome-photos application
browse DMS contents.</p>
<ul>
<li>
<p>I can use the existing <a href="https://bugzilla.gnome.org/show_bug.cgi?id=707346">grilo
plugin</a> to do the same.</p>
</li>
<li>
<p>I can make use of <a href="https://github.com/01org/dleyna-server">dleyna-server</a>. It
is a DBus service that provides an API to browse the contents of the DMS
easily.</p>
</li>
</ul>
<p>Pondering the two options above, I think using dleyna-server directly is
the better choice: the grilo plugin is still in the development stage and might
not be reliable in the long run. Moreover, the grilo plugin doesn’t provide
features such as async calls and a cancellable API.</p>
<h2 id="current-progress">Current Progress</h2>
<p>Having made dleyna-server my choice, I installed it
along with its dependencies (dleyna-core, dleyna-connector-dbus) and launched
dleyna-server-service. The API it provides can easily be viewed using d-feet. So
now implementing DMS support in gnome-photos reduces to calling the functions
provided by the dleyna-server D-Bus service. Using D-Bus services in
gnome-photos requires playing with
<a href="https://developer.gnome.org/gio/2.29/gdbus-codegen.html">gdbus-codegen</a>.
My mentor gave me a <a href="https://bugzilla.gnome.org/show_bug.cgi?id=726919">bug</a> to
solve related to gdbus-codegen. I solved it and learnt how to use
D-Bus services from the application.</p>
<p>My next steps involve making use of dleyna-server-service’s API from the
application, browsing its content, and showing it in gnome-photos. But before
that, there are a lot of decisions, especially UI decisions, that need to be taken.
For example, what if there are a lot of DMSes on the local network? The user
might not want the content of all of them shown in the application. The
application should give the user flexibility and full control over browsing
content from the DMSes around, with the easiest interface possible. I need to
discuss with the GNOME design team what is best in this case.</p>
<p>I am currently working on it and will update you soon about my progress.</p>
Google Summer of Code 20142014-04-23T00:00:00+00:00http://pranavk.me/open-source/google-summer-of-code-2014
<p>I have been accepted into the Google Summer of Code 2014 program. I am thankful to
Google. This is my first GSoC project and I am looking forward to giving it my best
shot. I am grateful to my mentor <a href="https://wiki.gnome.org/DebarshiRay">Debarshi Ray</a>, who
has been very helpful with my initial patches and first steps into the GNOME community.</p>
<p>My project will incorporate a new feature in gnome-photos: support for DLNA
Media Servers. After the completion of the project, the gnome-photos application
will have this wonderful capability to browse all DLNA Media Servers
available on the local network. I am excited about working on this project.</p>
<p>Here is the <a href="https://wiki.gnome.org/Outreach/SummerOfCode/2014/Projects/PranavKant_PhotosDLNA">wiki</a>
of the project.</p>
<p>I will be posting updates about the project on my
<a href="http://pranavk.github.io">blog</a> under the tag <code class="language-plaintext highlighter-rouge">gnome-soc</code>.</p>
<p>I hope it will be a great summer ahead full of learning. I also wish other accepted
students good luck with their projects and a wonderful summer of code ahead.</p>
<p>Good luck!</p>
My first steps into open source2014-04-02T00:00:00+00:00http://pranavk.me/open-source/my-first-steps-into-open-source
<p>I have been a huge fan of open source software for the last two and a half
years. I have been using Linux during this whole time and have been trying to
explore more and more of the open source software available. That means
that for any task, however complex, I try to find an open source
alternative to the existing, easily available software such as Adobe Photoshop. I
know how to use Photoshop, but GIMP is even better.</p>
<p>Contributing to open source software is even more fun than using it. When you
see your code pushed to master and being used by thousands, sometimes
millions, of users, you feel accomplished. It has been over a month since I
started contributing to open source organisations, including Mozilla and GNOME. I
have also applied for a GSoC 2014 project under <a href="http://www.gnome.org/get-involved/">GNOME</a> that proposes to extend
<a href="https://wiki.gnome.org/Apps/Photos">gnome-photos</a> by incorporating support for <a href="http://en.wikipedia.org/wiki/Digital_Living_Network_Alliance">DLNA Media
Servers</a>. My other contribution, to <a href="http://mozilla.org">Mozilla</a>, was
also a wonderful experience. I learnt a lot during the whole process of writing
the patch, which exposes the raw data functionality of the UDP
socket messages interface. It was listed as a good-first-bug, but I think its
complexity was enough not to list it as one.</p>
<p>Apart from that, I also started working on <a href="https://www.gitorious.org/bzdesk">bzdesk</a>, a standalone, from-scratch
desktop client for Bugzilla, in C using GTK+. I will be working on it in
my free time, since the project requires a good amount of time.</p>
<p>Working with the above-mentioned organisations was a wonderful experience. Both
Mozilla and GNOME have excellent infrastructure, documentation, and
communities for newbie developers. The #gnome-love channel on irc.gnome.org is very
helpful; they embrace new developers trying to solve bugs for GNOME. It is
similar with the Mozilla community, which is huge; you will find a
variety of people working on various things. Mozilla’s architecture is also
complex compared to GNOME’s, which is easy to understand. GNOME
has a variety of modules that provide a wonderful user experience.</p>
<p>Contributing to open source software lures me. I am in love with the GNOME
community and have plans to keep contributing to GNOME. The <a href="http://www.google-melange.com/gsoc/homepage/google/gsoc2014">GSoC 2014</a> project under
GNOME, if it gets approved, will be my first significant contribution to GNOME and,
ultimately, to open source.</p>
Loop Devices in linux - Mount disk images2012-11-28T00:00:00+00:00http://pranavk.me/linux/loop-devices-in-linux---mount-disk-images
<p>There are programs out there for Windows that mount ISO images
and the like to a virtual drive. You can then use those to perform tasks that
might not be possible without mounting the image. Software like
Daemon Tools and PowerISO are very good examples.</p>
<h2 id="loop-devices">Loop Devices</h2>
<p>Loop devices are pseudo-devices available under Linux, commonly found
under /dev/ with names like loopN, where N is a number: 0, 1, 2, 3,
and so on.</p>
<h2 id="mounting-a-disk-image">Mounting a disk image</h2>
<p>You can use a couple of commands to first attach an ISO image to
a loop device and then mount the loop device like you would mount any
other device on your computer.</p>
<p>To revert to the previous state you do the reverse: first
unmount the device and then detach the image from the loop
device.</p>
<p>There is a utility called <code class="language-plaintext highlighter-rouge">losetup</code>, available on Linux systems, which is
used to attach a disk image to a loop device. You need sudo
privileges to do so. Let’s assume you have an ISO image in your home
folder named <code class="language-plaintext highlighter-rouge">Ubuntu.iso</code> and you want to mount it as if you
had first burned a DVD or CD from the image and were now reading it. Do the
following:</p>
<p><code class="language-plaintext highlighter-rouge">sudo losetup /dev/loop0 ~/Ubuntu.iso</code></p>
<p>This attaches the ISO image to the loop device. If you are using
a GUI and a file manager like Nautilus, you will see that an Ubuntu entry
is added under devices in Nautilus. This is because Nautilus
subscribes to udev (which is responsible for reporting information
about attached devices), which keeps telling it about the devices
that get registered with it.</p>
<p>Finally, to mount the device, you can simply click on it in Nautilus
to see its contents, or you can mount it the
traditional way on the CLI:</p>
<p><code class="language-plaintext highlighter-rouge">sudo mount /dev/loop0 /mnt/</code></p>
<p>This will mount your image under <code class="language-plaintext highlighter-rouge">/mnt</code> and you should be able to read
it.</p>
<p>To revert the whole procedure, first unmount the image:</p>
<p><code class="language-plaintext highlighter-rouge">sudo umount /dev/loop0</code></p>
<p>and then detach the ISO image from loop device 0:</p>
<p><code class="language-plaintext highlighter-rouge">sudo losetup -d /dev/loop0</code></p>
<p>and you are done. Everything is back to normal! Enjoy mounting images
without any need for external software.</p>
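<p>The attach/mount and unmount/detach cycle can also be wrapped in a small script. With DRY_RUN=1 the sketch below only prints the commands it would run, so you can rehearse the whole cycle without root; the function names are my own, for illustration:</p>

```shell
#!/bin/sh
# Wrap the attach/mount/umount/detach cycle. With DRY_RUN=1 the
# commands are only printed instead of executed.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

mount_image() {    # mount_image <image> <mountpoint> <loopdev>
    run sudo losetup "$3" "$1"
    run sudo mount "$3" "$2"
}
unmount_image() {  # unmount_image <loopdev>
    run sudo umount "$1"
    run sudo losetup -d "$1"
}

# Rehearse the full cycle without touching the system.
DRY_RUN=1
plan=$(mktemp)
mount_image "$HOME/Ubuntu.iso" /mnt /dev/loop0 > "$plan"
unmount_image /dev/loop0 >> "$plan"
cat "$plan"
```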
Introduction to GPG - GNU Privacy Guard2012-11-28T00:00:00+00:00http://pranavk.me/cryptography/introduction-to-gpg---gnu-privacy-guard
<p>GPG is GNU Privacy Guard, a piece of software built on the OpenPGP standard. The terms GPG and OpenPGP are used interchangeably too often, but one must understand that they are different: one is a standard and the other is software built on that standard.</p>
<p>If you are new to public key cryptography, let me showcase the concept here. Public key cryptography is a very secure way to share messages between two parties. The sender transmits the message after encrypting it with a special key called the ‘public key’. This public key, as the name tells, is public and can be shared with anyone. The other key is the ‘private key’, which, as the name tells, should not be made public to anyone; it should be kept private with you only. Now anyone having a copy of your public key can encrypt messages and send them to you. That encrypted message can only be decrypted with the private key, and since you are the only person holding that private key, you are the only person in the world who can decrypt messages encrypted with the corresponding public key. Hence it is a very secure method of sharing extremely sensitive information.</p>
<p>There is also the concept of symmetric keys. In that system, the same key that is used for encrypting the data is used for decrypting it, so it is less secure than public key cryptography.</p>
<h2 id="setting-up-your-gpg">Setting up your GPG</h2>
<p>I will demonstrate how to set up GPG on Ubuntu, but the process should be the same on any other distro, since the software I will be using, gpg, is the same elsewhere.</p>
<p>First install the gpg software using your package manager. For ubuntu users :</p>
<p><code class="language-plaintext highlighter-rouge">sudo apt-get install gnupg</code></p>
<p>After installing this, you need to generate your key. To do so, run:</p>
<p><code class="language-plaintext highlighter-rouge">gpg --gen-key</code></p>
<p>It will ask you for the required information. The program is quite interactive, so you should be able to figure out what it is asking and provide what is needed: basic information like your real name (it is recommended to keep it the same as on your passport and other documents), your email address, and so on. It will also ask for a passphrase, which is like a password to unlock your private key. Whenever you use your private key (say, to decrypt messages, which inevitably requires it), gpg asks for the passphrase to unlock the private key first. It is recommended to choose a good passphrase, depending on how often you let other people use your PC or laptop. If you never allow anybody to even touch your PC, you may not need a passphrase at all, but in that case you must make very sure that the private key on your PC is not accessible to anybody by any means.</p>
<p>After you provide all the required information, your keys will be generated. Now you can upload your public key to a keyserver so that other people have access to it. This allows them to send you encrypted messages containing sensitive information that can only be decrypted by you, the holder of the private key.</p>
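<p>To make the flow concrete, here is a round-trip sketch: it generates a throwaway key in an ephemeral GNUPGHOME (the batch parameters and names are illustrative, and the demo key deliberately has no passphrase) and pipes a message through encrypt and then decrypt. It skips itself if gpg is not installed:</p>

```shell
#!/bin/sh
# Round-trip demo with a throwaway key. Everything lives in an
# ephemeral GNUPGHOME; the key has no passphrase (%no-protection).
if command -v gpg >/dev/null 2>&1; then
    GNUPGHOME=$(mktemp -d); export GNUPGHOME
    chmod 700 "$GNUPGHOME"
    gpg --batch --quiet --gen-key 2>/dev/null <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 2048
Subkey-Type: RSA
Subkey-Length: 2048
Name-Real: Example User
Name-Email: user@example.com
Expire-Date: 0
%commit
EOF
    # Encrypt to the public key, decrypt with the private key.
    result=$(echo 'a secret note' \
        | gpg --batch --quiet --encrypt --recipient user@example.com \
        | gpg --batch --quiet --decrypt 2>/dev/null)
else
    result='skipped: gpg not installed'
fi
echo "$result"
```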
<p>There are websites like <a href="http://www.biglumber.com">BigLumber</a> which allow you to list your public keys and have them signed by others. Now let us see what this signing of public keys means.</p>
<h2 id="increasing-web-of-trust">Increasing web of trust</h2>
<p>Having your public key signed with other people’s private keys increases your web of trust. The more people have signed your public key, the more people verify that the person claiming to be you really is you. Anyone doing such verification should check the details against a government-issued identity before signing the public key. On <a href="http://www.biglumber.com">BigLumber</a> you can find the nearest listings of individuals who can sign your public key and thereby increase your web of trust.</p>
<h2 id="signing-documents">Signing Documents</h2>
<p>You can also sign documents that require owner authenticity. For example, you submit a journal article and the people at the other end need to verify that it came from you. Since the document is digitally signed by you, they can verify this fact using your public key.</p>
<p>You can go to <a href="http://www.biglumber.com">BigLumber</a> to start finding people to sign your keys, upload your public key to public servers etc. and do lot more stuff.</p>
Saving Buffered Google Chrome Videos to Disk2012-11-09T00:00:00+00:00http://pranavk.me/general/saving-buffered-google-chrome-videos-to-disk
<p>First of all, I must tell you that I will be talking exclusively about
Linux. However, with only a small bit of effort you should be able
to adapt the same procedure to other
operating systems like macOS and Windows.</p>
<h2 id="overview">Overview</h2>
<p>First of all, you need the cache enabled in your Google Chrome
browser for this trick to work. Just go to the browser settings and
enable the cache. If it is already enabled, you can increase the
cache limit to something appreciable if you watch big videos, so that
the whole video resides in the cache and you can save even
large buffered videos to your disk and watch them later.</p>
<h2 id="procedure">Procedure</h2>
<p>The cache folder of your Google Chrome browser resides in your home
directory, in a folder named <code class="language-plaintext highlighter-rouge">google-chrome</code>. Enter
that folder and then the directory named <code class="language-plaintext highlighter-rouge">Profile
1</code>. There are two directories inside, <code class="language-plaintext highlighter-rouge">Cache</code> and <code class="language-plaintext highlighter-rouge">Media
Cache</code>. Your buffered media files, all the pictures, music clips, and video,
reside there. The problem is that it’s really hard to tell which
of the files is a video, which is an HTML page, a script, a
song, or a picture.</p>
<p>Go to this folder:</p>
<p><code class="language-plaintext highlighter-rouge">cd ~/.cache/google-chrome/Profile\ 1/</code></p>
<p>Video files usually reside in the <code class="language-plaintext highlighter-rouge">Cache</code> folder, so go into that
directory.</p>
<p><code class="language-plaintext highlighter-rouge">cd Cache</code></p>
<p>You will see a lot of files there. Just sort them by date and the
latest buffered videos should be among them. Match a few of the files
by timestamp. You can check the type of those files
with the <code class="language-plaintext highlighter-rouge">file</code> command:</p>
<p><code class="language-plaintext highlighter-rouge">file f_00564e</code></p>
<p>This outputs the type of the file given above. If it’s a video
file, it tells you so. Check a few of the files with matching
timestamps and then play the video file with your default
media player.</p>
<p>If you are using GNOME, you can open such a file with the
default application for that type of video, song, or anything else:</p>
<p><code class="language-plaintext highlighter-rouge">gnome-open f_00564e</code></p>
<p>If you are using KDE, then use this:</p>
<p><code class="language-plaintext highlighter-rouge">kde-open f_00564e</code></p>
<p>This command opens the file with the default application for that
file type.</p>
<h2 id="scope">Scope</h2>
<p>I see that this procedure is very manual and requires a bit of
effort. It could be made much simpler for day-to-day use
by writing a script that automatically detects a new file
entering that directory whose type is, say, video, and then offers
to save it to the hard disk for later
use.</p>
<p>I will be working on such a tool soon. If you get to it
before me, do tell me by dropping me an email.</p>
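<p>As a starting point for such a tool, here is a minimal sketch: it scans a cache directory, newest files first, and reports the ones whose MIME type says video. The demo runs against a fake cache directory containing a minimal AVI header, so all the paths and file names are illustrative:</p>

```shell
#!/bin/sh
# Scan a cache directory, newest first, and report files whose MIME
# type is video/*.
find_cached_videos() {
    cachedir=$1
    ls -t "$cachedir" | while read -r name; do
        f=$cachedir/$name
        [ -f "$f" ] || continue
        case $(file --brief --mime-type "$f") in
            video/*) echo "$f" ;;
        esac
    done
}

# Demo on a fake cache: one text file and one minimal AVI header.
cache=$(mktemp -d)
printf 'not a video\n' > "$cache/f_000001"
printf 'RIFF\0\0\0\0AVI LIST' > "$cache/f_000002"
find_cached_videos "$cache" > "$cache/videos.txt"
cat "$cache/videos.txt"
```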
Power saving and increasing battery backup in linux with nvidia GPU2012-09-23T00:00:00+00:00http://pranavk.me/linux/power-saving-and-increasing-battery-backup-in-linux
<p>If you are new to Linux (any distro, like Ubuntu or Fedora), you might have noticed that battery backup on Linux is just beyond horrible. Coming from another OS, the same machine that used to give you 4-5 hours of battery backup suddenly starts giving you less than one hour. Moreover, you notice your laptop heating up far beyond anything you ever experienced on the other OS.</p>
<p>I use a Dell XPS L502x. My system dual-boots Windows and various Linux flavours. I love to play with Linux, but the one thing that used to keep me away from it was the horrible battery backup and the horrible heating issues I had with my laptop when using Linux.</p>
<h2 id="problem">Problem</h2>
<p>The most common reason laptops heat up and consume more power is the high-performance graphics card (NVIDIA in my case). NVIDIA has a wonderful technology called Optimus that turns the card off when it is not in use on Windows, hence consuming less power and emitting less heat. Unfortunately NVIDIA ships a driver with this technology only for Windows and does not want to write another driver for Linux users, as they think the market here is not going to help them economically, the number of Linux users being small. NVIDIA has also officially refused to provide any such Optimus support on the Linux platform.</p>
<h2 id="solution">Solution</h2>
<p>What you can do is switch off your graphics card manually so that it stops working for the current session on Linux. Battery backup will suddenly increase by more than 150% and you will have no heating issues at all with your laptop.</p>
<h2 id="concept">Concept</h2>
<p>What we will do in this tutorial is insert a custom-built module into the Linux kernel and then run a script that disables the NVIDIA chip for the current session.</p>
<h2 id="tutorial">Tutorial</h2>
<p>First of all you will need to download this:</p>
<p><code class="language-plaintext highlighter-rouge">https://github.com/mkottman/acpi_call</code></p>
<p>You can download it using git as :</p>
<p><code class="language-plaintext highlighter-rouge">git clone https://github.com/mkottman/acpi_call.git</code></p>
<p>If you do not have git installed on your system, you can install it as:</p>
<p>or</p>
<p><code class="language-plaintext highlighter-rouge">sudo yum install git</code></p>
<p>Before starting all of this, I would like to show you the rate at which your battery is discharging, so that you can see the difference before and after running the script.
Run the following command in your terminal to find the discharge rate. Please make sure that the laptop is not charging while you run the command, otherwise it will show you the wrong information.</p>
<p><code class="language-plaintext highlighter-rouge">grep rate /proc/acpi/battery/BAT0/state</code></p>
<p>Note this value.
The downloaded package already has a README, so you can also follow the instructions there, but to make it easier I am showing them below.
Go to the acpi_call folder you just downloaded. Build it, insert the generated module <code class="language-plaintext highlighter-rouge">acpi_call.ko</code> into the kernel, and then run the script <code class="language-plaintext highlighter-rouge">test_off.sh</code>. In my case <code class="language-plaintext highlighter-rouge">test_off.sh</code> was not executable, so I had to make it executable first. All the commands are given below:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd acpi_call
make
chmod +x test_off.sh
sudo insmod acpi_call.ko
sudo ./test_off.sh
</code></pre></div></div>
<p>The second command compiles the program for your platform and generates the two files <code class="language-plaintext highlighter-rouge">acpi_call.ko</code> and <code class="language-plaintext highlighter-rouge">test_off.sh</code>.
The third command makes the test_off.sh file executable if it is not (if it already is, running this command is still harmless).
The next command, <code class="language-plaintext highlighter-rouge">insmod</code>, inserts the module acpi_call.ko into the Linux kernel.
Lastly, test_off.sh runs the script and does the main work for you.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Note: You need root access here, so using sudo is necessary.
</code></pre></div></div>
<p>Some output will be printed on your screen, one of the lines will say ‘works’, and you are done. Now check the rate at which the battery is discharging:</p>
<p><code class="language-plaintext highlighter-rouge">grep rate /proc/acpi/battery/BAT0/state</code></p>
<p>If everything worked fine, the discharge rate should drop noticeably shortly after running the script. This all worked on my Dell XPS L502x with dual graphics cards (Nvidia and built-in Intel HD graphics) and also on some ASUS models, as the author of the script has mentioned in his README. I hope this will also work on your dual-graphics model.</p>
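<p>If you want to compare the before and after numbers programmatically rather than eyeballing the <code class="language-plaintext highlighter-rouge">grep</code> output, here is a small sketch. Note the assumption: it parses the old <code class="language-plaintext highlighter-rouge">/proc/acpi/battery</code> text format used above, which modern kernels have replaced with <code class="language-plaintext highlighter-rouge">/sys/class/power_supply</code>.</p>

```python
# Sketch: parse the 'present rate' line from the old /proc ACPI battery
# interface used in this post. The file format is an assumption based on
# the grep output shown above; modern kernels no longer provide it.
def discharge_rate(state_text):
    """Return the 'present rate' value (an integer, in mW or mA) from the
    contents of /proc/acpi/battery/BAT0/state, or None if absent."""
    for line in state_text.splitlines():
        if line.startswith('present rate:'):
            return int(line.split(':')[1].split()[0])
    return None

sample = "charging state:      discharging\npresent rate:        14800 mW\n"
print(discharge_rate(sample))  # 14800
```

<p>Run it once before loading the module and once after; a lower number means the script worked.</p>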
<p>Currently, you have to run the following two commands every time, from the same acpi_call folder, to make this trick work.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo insmod acpi_call.ko
sudo ./test_off.sh
</code></pre></div></div>
<p>Or, if you do not want to do this, you can edit your .bashrc and put these commands in it so that they run every time you start up your system.
Or you can install the acpi_call.ko module among your Linux kernel modules and have it loaded at boot by playing with /etc/grub.d/.</p>
<h2 id="more-help-regarding-battery">More help regarding battery</h2>
<p>If you want to know more about battery conservation in Linux and want to save even more battery, it is highly recommended that you have a look at this website :</p>
<p><a href="http://www.lesswatts.org/">Lesswatts.org</a></p>
Parsing large XML files efficiently with Python2012-07-04T00:00:00+00:00http://pranavk.me/python/parsing-xml-efficiently-with-python
<p>Parsing XML with Python is not a difficult task if you have some familiarity with Python and any of the libraries that provide methods to parse XML. But what if you want to parse very large XML files? You are probably here because you were trying to parse a huge XML file and your CPU could not handle it, or you ran into memory issues.</p>
<p>I was in the same situation when I was trying to parse a very large XML file, one whose size was in the GBs. Whenever I started my Python script, it simply got killed every time. Then I came across some scholarly articles on parsing XML files efficiently.</p>
<h2 id="concept">Concept</h2>
<p>Basically, when parsing very large XML files, the problem is that a traditional parser holds information about every parent, its children, and everything else. So as you approach the end of the file, it keeps storing everything in memory, which means that you may run out of memory.</p>
<h2 id="approach">Approach</h2>
<p>What you basically have to do is delete the references to parents and children as you parse the file from top to bottom. We will accomplish this with the help of the <code class="language-plaintext highlighter-rouge">lxml</code> module in Python. If you don’t have it, just search for a package named <code class="language-plaintext highlighter-rouge">python-lxml</code> if you are on Ubuntu, or for a similar package on any other distribution. You can also install it with <code class="language-plaintext highlighter-rouge">pip</code> or <code class="language-plaintext highlighter-rouge">easy_install</code>, whichever you like.</p>
<p>So, unlike traditional parsers, what lxml does is emit events as it parses the file; it does not hold the whole file in memory, it just reads the file in chunks. The events I just mentioned are events like ‘start’ and ‘end’. If you ask for all the ‘start’ events to be captured, it gives you the element corresponding to each such event. Similarly, if you ask it to capture all the ‘end’ events, it gives you the element corresponding to each of those. You can also ask for both events, in which case it gives you the corresponding element every time it hits either one. You can additionally specify the tag to be captured, so that you ignore everything else and take out only the useful elements.</p>
<p>One thing you have to keep in mind is that a ‘start’ event won’t have any information about the element’s children; it only has information about its parent and the element’s attributes.
The ‘end’ event, on the other hand, has all the information about the element’s parent, children, and content.</p>
<p>Following code demonstrates the whole process :</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>from lxml import etree

context = etree.iterparse(filename, events=('end',), tag='node')
for event, elem in context:
    # Do whatever you want with 'elem' here. It has all the information
    # about the content of the 'node' tag and its child elements, because
    # we asked iterparse to capture only 'end' events, i.e. the event fired
    # when the parser hits &lt;/node&gt; (or &lt;node/&gt; if it has no content).
    elem.clear()
    # The line above says you won't access any child elements of 'elem'
    # any more, so the parser can just throw them away.
    # Now clear the already-parsed elements preceding 'elem' in its parent:
    while elem.getprevious() is not None:
        del elem.getparent()[0]
    # 'is not None' is used because if the element being parsed is the root
    # itself, there is no parent, so this would raise an exception; you may
    # have to handle that case too.
</code></pre></div></div>
<h2 id="parsing-osm-data">Parsing OSM data</h2>
<p>I used this script to parse the <a href="http://www.openstreetmap.org">OSM</a> data and capture all the nodes that the XML file has. You can add your own code to this script to pull whatever you want out of the OSM data. For example, if you want all the ‘atm’ nodes in your town, you can run this script and capture every ‘atm’ in your area along with its latitude and longitude values, provided that your city is not too remote and OSM has enough data about it, with people contributing data to OSM just as they do to Google Maps.</p>
<p>Frankly speaking, everyone should contribute data to OSM rather than Google Maps. It’s free and open source. You can download their huge dataset in compressed form (about 21 GB or so) from their website, but can you do such a thing with Google? Let me know if you can.</p>
<p>You can download the OSM data <a href="http://planet.openstreetmap.org/">here</a> and see what they provide.</p>
<p>Code :</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>from lxml import etree

context = etree.iterparse(filename, events=('end',), tag='node')
for event, elem in context:
    Id = elem.attrib['id']
    lat = str(elem.attrib['lat'])
    lng = str(elem.attrib['lon'])
    for c in elem:
        if c.attrib['k'] == 'created_by' or c.attrib['k'] == 'source':
            continue  # We don't want to keep such tags in our DB.
        key = c.attrib['k']  # These are the tags inside the node, each with a key and a value
        val = c.attrib['v']
        # You can do more filtering here if you want specific keys or values,
        # e.g. if you want only ATMs, then filter val against 'atm'.
        # Store the information in a file or DB or wherever you want to use it.
    elem.clear()
    while elem.getprevious() is not None:
        del elem.getparent()[0]
</code></pre></div></div>
<p>And then you are done. You have all the important information you need out of the OSM data, and quickly too. If you want it even faster, you can make the program multithreaded.</p>
NoSQL databases2012-07-01T00:00:00+00:00http://pranavk.me/database/nosql-databases
<p>Many of you might have worked with SQL databases, which basically store your entries in the form of tables. You can then manipulate those entries, or records, the way you like. There are several commands that help you manipulate and analyze the data in a SQL database in a wonderful way. You can even join tables/relations and operate on them. MySQL and PostgreSQL are some of the SQL database servers you can use to build your database in SQL form. All of these are Relational Database Management Systems (RDBMS).</p>
<h2 id="nosql---yes-please-no-more-sql">NoSQL - Yes, please no more SQL.</h2>
<p>But what is this NoSQL? As the name suggests, it’s a database with no more SQL. Yes, it does not use SQL as its query language. Basically, a NoSQL database uses a key-value store to hold its entries. So you can imagine a container holding keys, each unique from the others, with each key having a value associated with it. Remember that no two keys can be the same; if you later insert a key that matches one already in the database, you are going to overwrite it.</p>
<p><strong>NOTE</strong> : You can search your NoSQL database by keys. Remember that you can never search your NoSQL database by values. A NoSQL database does not care what the values are or what mess you have created inside them. So you should always structure your program keeping this in mind.</p>
<p>There are several NoSQL databases available in the market today: MongoDB, Redis, etc. Redis is one of them, and I love <a href="http://www.redis.io">redis</a> because of the diverse data structures it provides. Usually, in most NoSQL databases, the keys are just simple strings and the value can be any data structure. For example, in Redis the value can be an advanced data structure like a hash table, a set, or a sorted set (which always keeps things sorted using a score associated with each value). So you can have one key with a set associated with it, or a hash table associated with it. Or, if you want to go more complex, you can have nested hash tables or nested sets or sorted sets. Redis is often used to cache things in a NoSQL fashion, but don’t assume that it just caches things and forgets everything after a server restart; it is persistent and keeps copying data to the hard disk as well. MongoDB is also a NoSQL database, but it cannot act as a cache, as it stores everything on the hard disk itself.</p>
<h2 id="why-do-i-need-a-nosql-">Why do I need NoSQL ?</h2>
<p>Well, there are situations when you have data of variable complexity. For instance, you may have data about cities, but it is not guaranteed that all cities have the same amount of information associated with them. If you go with an RDBMS, you will have to create columns, and many of the columns for many records will be left empty because the city associated with that record simply doesn’t have that information.</p>
<p>But if you are using a NoSQL database, you can have a key with the city name and a hash table associated with that key, which gives you this ability to store differently sized data. Your hash tables for each city can be of different sizes. If your NoSQL database doesn’t provide such good data structure support, unlike Redis, then you may create a key for each feature, in this format :</p>
<p><code class="language-plaintext highlighter-rouge">Bangkok:Population</code></p>
<p>which again gives you the flexibility of varying features associated with each city. Just don’t worry about the mess created by doing this; you don’t need to worry as long as you are sure it can be retrieved successfully. In this case you just search for the keys starting with a city name and then take the feature after the colon. So searching is easy, as you see. NoSQL takes care of all the mess you create inside it; just don’t worry about it.</p>
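<p>The colon-namespaced key pattern can be sketched with a plain Python dict standing in for the key-value store (in real life the store would be Redis or similar, and the city data below is made up):</p>

```python
# A plain dict standing in for a key-value store such as Redis.
store = {
    "Bangkok:Population": "10539000",
    "Bangkok:Country": "Thailand",
    "Madrid:Country": "Spain",
}

def features_of(city):
    """Scan for keys starting with 'city:' and return {feature: value}."""
    prefix = city + ":"
    return {key[len(prefix):]: value
            for key, value in store.items()
            if key.startswith(prefix)}

print(features_of("Bangkok"))
# Each city can carry a different number of features; no empty columns.
```

<p>With Redis itself you would do the same prefix scan with the <code class="language-plaintext highlighter-rouge">SCAN</code>/<code class="language-plaintext highlighter-rouge">KEYS</code> pattern match instead of iterating a dict.</p>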
<p>So every key-value pair will be stored in this format :</p>
<p><code class="language-plaintext highlighter-rouge">"key": "value"</code></p>
<p>This can be as simple as this :</p>
<p><code class="language-plaintext highlighter-rouge">"Madrid": "Spain"</code></p>
<p>or can be as complex as the value being a hashtable.</p>
<h2 id="nosql-support-in-rdbms">NoSQL support in RDBMS</h2>
<p>If you feel that you don’t want to give up the power of SQL in an RDBMS, but at the same time want the features that NoSQL provides, you can have NoSQL support inside your RDBMS too. I used PostgreSQL (an RDBMS like MySQL, if you haven’t used it), and it has this wonderful extension mechanism: you can create extensions for it. The extension that provides this support is called <strong>hstore</strong>. You can simply create your tables with columns and make one column’s datatype hstore. That column will then accept key:value pairs, which you can manipulate afterwards by searching the column by key. Each record in your relation/table will have its own key:value pairs.</p>
<p>So all you have to do is log in as the postgres user and then open up psql by typing <code class="language-plaintext highlighter-rouge">psql</code> in the terminal.</p>
<p>Then create the extension as :</p>
<p><code class="language-plaintext highlighter-rouge">create extension hstore;</code></p>
<p>Remember that you have to install the postgresql’s contrib package from your repository or from their site before creating this extension.</p>
<p>Then you can create the table as :</p>
<p><code class="language-plaintext highlighter-rouge">create table foo(id integer primary key, name varchar(50),population integer, data hstore);</code></p>
<p>and you are done. Now you can enter key:value pairs in the data column for each record you insert into the database. For example, for the above table structure you can add records like this :</p>
<p>You denote the key:value pairs like this :</p>
<p><code class="language-plaintext highlighter-rouge">"population"=>"555232"</code></p>
<p><code class="language-plaintext highlighter-rouge">insert into foo values(1,'delhi',124333,'"country"=>"india","language"=>"hindi"');</code></p>
<p>This will populate your PostgreSQL table, with the data field varying for each of your records. You can then search this column by key.</p>
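<p>If you are assembling the hstore literal from application code, a small helper keeps the quoting straight. This is a sketch only; it assumes keys and values contain no quote characters needing escaping, and in real code you would let your database driver do the quoting:</p>

```python
def to_hstore(pairs):
    """Render a dict as a PostgreSQL hstore literal,
    e.g. {'country': 'india'} becomes '"country"=>"india"'.
    Sketch: assumes no embedded quotes that would need escaping."""
    return ",".join('"{}"=>"{}"'.format(k, v) for k, v in pairs.items())

print(to_hstore({"country": "india", "language": "hindi"}))
```

<p>The output slots directly into the last column of the <code class="language-plaintext highlighter-rouge">insert</code> statement above.</p>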
Getting started with TopCoder2012-06-01T00:00:00+00:00http://pranavk.me/programming/getting-started-with-topcoder
<h2 id="what-is-topcoder-">What is TopCoder ?</h2>
<p>Well, you might have heard of this famous website while exploring the web here and there. I found it while exploring Google’s careers website. As I remember, they had mentioned somewhere that if you participate in online programming competitions like TopCoder, you can mention your score in your resume when applying to Google, or something like that; I don’t remember it exactly.
Coming to the point, TopCoder is an online programming contest site where you can participate and improve your programming skills. They have a wonderful set of problems and conduct online programming competitions periodically. The competitions range from design to programming, so you can pursue your interest. If you win a competition, they also give you a good amount of money, so it is better than freelancing.
They also have a rating system, in which each member is rated according to the performance he shows in the competitions. This motivates participants to aim for a high rating. The best programmers are the highest rated, and wherever you see their username it is written in red, which signifies a red-rated coder: the highest rating on the TopCoder platform.</p>
<h2 id="getting-started-with-topcoder">Getting Started with TopCoder.</h2>
<p>For all online programming competitions, you at least need to be registered on TopCoder’s website. After that, you have to register separately for each competition. There are many kinds of programming competitions hosted on TopCoder, as I said before. After signing up on the TopCoder website, go to the community portal, where you can see all the latest happenings in the TopCoder community. TopCoder is also a wonderful website for someone who wants to learn programming, data structures, and algorithms. You can find some of the best programmers in the world competing with each other at TopCoder, and especially read their tutorials on the TopCoder website about how they tackle programming problems and how they view a particular problem. So it is really nice to hang out with TopCoder’s community.</p>
<p>As you are reading this article and have made it this far, I expect it is your first time at TopCoder, so I will tell you the most basic and most important things about getting started and about the major competitions that run on the website. The rest of the small things you will pick up once you get the hang of it.</p>
<p>At a broad level, there are two important kinds of programming competitions related to algorithms. You can reach them by going to the community portal on the TopCoder website and then navigating to Competitions and then to Algorithms. There you will find two types of matches to compete in.</p>
<ul>
<li>Single Round Matches (SRM)</li>
<li>Marathon Matches (MM)</li>
</ul>
<p>Both are important in their own way. For the first kind of match you need to download a Java applet from the TopCoder website, in which you will compete. The applet is called the ‘TopCoder Arena’; I love it. Besides giving you a platform to compete in SRMs, it gives you a place to interact with great programmers and to practice for SRMs, MMs, and many other competitions. Whenever an SRM begins, you need to come to this arena, log in, and then enter the SRM room assigned to you. Remember, as I said above, you have to register for the match before competing in it; registration begins 3 hours before the SRM starts and closes 5 minutes before it starts. You can find many more tutorials on the TopCoder website about SRMs and Marathon Matches; my aim was to introduce you to TopCoder, one of the best online programming arenas for programmers. I hope you will love to code at TopCoder.</p>
<p>Hope you will have a high rating ahead in TopCoder.</p>
Meet your Hard Disk Drive2012-05-22T00:00:00+00:00http://pranavk.me/general/meet-your-hard-disk-drive
<p>First of all, a hard disk consists of many platters stacked together; a platter is a disk-shaped object. Then comes the head: the head reads the data written on the platters, and between each pair of platters there are two heads, reading the data on the facing surfaces of the two different platters.
The head moves from the outer to the inner edge when reading data. Each platter consists of tracks, which are further divided into sectors. The tracks are numbered from 0 to usually 1023 in a standard hard disk, and each track can have many sectors. One thing to note here: as you may know from high-school physics, velocity equals the product of radius and angular velocity (v = wr), so in one rotation the head passes over more surface on tracks that are farther from the center of the disk. To compensate for this difference, data is stored less densely on the outer edges, so that the number of sectors in each track is constant, because no physical difference is desired when designing the hard disk. The result of this modification is that the same amount of data can be read from any head position over the same period of time.</p>
<p>As said above hard disk consists of platters which further consists of tracks and so on. If we consider the same track number of each platter in the hard disk then that forms a cylinder.</p>
<p>Each sector of the disk contains a maximum of 512 bytes of data, which has been standardized. <strong>E.g :</strong> to store 1000 bytes of data, we need two sectors allocated for this purpose, which amounts to 1024 bytes of allocation on the hard disk.
Now there is one more thing, one more standard plan: as said above, there are many platters in a hard disk, so one side of the top platter (the side not facing another platter) is used to write the hardware track-positioning information. This is written during the assembly of the hard disk in the factory. The system’s disk controller reads this data to place the drive heads at the correct sector position.
So if there is a stack of 1000 platters, then 1999 faces will be used to write data and will be available to the operating system, and the one remaining face of one of the platters is used by the disk controller to access the data on the others.</p>
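<p>The face arithmetic is simple enough to check in code (the 1000-platter stack is the hypothetical example from the text):</p>

```python
def usable_faces(platters):
    """Faces available to the OS: every face of every platter except the
    one reserved for the factory-written track-positioning information."""
    return 2 * platters - 1

print(usable_faces(1000))  # 1999
```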
<h2 id="data-fragmentation">Data fragmentation</h2>
<p>Many of you might have heard of data fragmentation when defragmenting your hard disk. Well, what does that mean? Let’s have a look.</p>
<p>What if the size of the file to be stored is 900 bytes? In that case, obviously, two sectors will be used by the file system to store this file. The contiguous sectors allocated for storing a file are called a cluster.
Now let us assume that more data is appended later and the file grows to 1500 bytes. Though it needs only 3 sectors (because 1500 &lt; 512 × 3 = 1536), it will take 4 sectors, since the number of sectors in a cluster is always a power of 2; so in the above case another two sectors will be allocated and the cluster will then consist of 4 sectors.</p>
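<p>The allocation rule just described can be sketched in a few lines of Python. The 512-byte sector size is from the text; the power-of-two rounding of the cluster follows the example:</p>

```python
SECTOR_SIZE = 512  # bytes per sector, as standardized above

def sectors_needed(file_size):
    """Smallest number of 512-byte sectors that can hold the file."""
    return -(-file_size // SECTOR_SIZE)  # ceiling division

def cluster_sectors(file_size):
    """Round the sector count up to a power of two, as clusters require."""
    needed, cluster = sectors_needed(file_size), 1
    while cluster < needed:
        cluster *= 2
    return cluster

print(sectors_needed(900), cluster_sectors(900))    # 2 sectors, cluster of 2
print(sectors_needed(1500), cluster_sectors(1500))  # 3 sectors, cluster of 4
```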
<p>Now, the most <strong>important concept of fragmentation.</strong>
What if the two sectors that need to be allocated are not available adjacent to the file? In that case the two sectors can be allocated anywhere on the disk, just anywhere; it doesn’t matter which cylinder, which track, which sector. A file stored in this non-contiguous manner is called fragmented. This can delay opening the file, as the heads need extra time to travel to the different positions.</p>
<p>A bigger cluster size can be used by the file system to reduce fragmentation, but this may lead to a situation where much of the space goes unused and you may run out of storage.</p>
Blogging for Hackers2012-05-21T00:00:00+00:00http://pranavk.me/general/blogging-for-hackers
<h2 id="a-new-platform--jekyll">A new platform : Jekyll</h2>
<p>Well, if you are looking for a blogging platform for yourself and you are a hacker (I don’t mean the kind who breaks into computer systems; by hacker I mean developer), or want to be one, then you have come to the right place. I also used to explore here and there on the internet to find a good blogging platform for myself. I used Blogspot and found it too common these days, then switched to WordPress. WordPress was okay for a few days, but then I got bored and found it too ordinary as well; it was not giving me the power to fully customize things. I didn’t want to work with their ready-made buttons, I wanted to creep inside those buttons. Then I found something on the internet: Jekyll, yes, you read it right. I read about it here and there and am now compiling it in my blog. Jekyll is a parsing engine. It’s not a language and not a blogging platform; it just parses the files written by you and creates a website for you. It was created by GitHub co-founder Tom Preston-Werner for GitHub Pages, because he wanted a setup secure enough that hackers couldn’t easily break into GitHub Pages. Hence it is very secure. Jekyll is a very young open source project, still in its initial stages, but it has successfully attracted many hackers who are looking to create a new blog or to find a new blogging platform (you can easily transfer all of your data to a Jekyll-created website) just because they are bored with their old ones.</p>
<h2 id="jekyll-behind-the-scenes">Jekyll behind the scenes</h2>
<p>The Jekyll server parses all the files in the expected directory structure, computes all the posts, pages, dates, and everything else, and creates a single Ruby object named site. All the data, including posts, pages, and categories, can be accessed through this site object. You don’t need these details to get started.</p>
<h2 id="jekyll--introduction">Jekyll : Introduction</h2>
<p>You can create both static and dynamic websites with it. I like static websites, because I never have much time to maintain dynamically generated ones, managing the security of dynamic websites against SQL injections and all that. And I think that if you are really blogging just out of interest, just because you want to share thoughts, you also don’t have that much time to get into dynamic web pages, unless you are planning to build a business into your blog and want to provide users with more facilities apart from blogging. Remember, I don’t mean that with static web pages you won’t be able to add comments and all that; you can surely run scripts and make your website interactive using Disqus, etc. But if you still can’t hold yourself back and want to go dynamic, you can refer to this :</p>
<p><a href="http://bionicspirit.com/blog/2012/01/05/blogging-for-hackers.html">Heroku and jekyll sitting on a tree</a></p>
<p>As I said above, Jekyll is nothing more than a parsing engine, so all the time you play with files and folders, writing your content in a particular way so that Jekyll parses it and generates the web pages the way you want. There are several ways to install Jekyll, but the easiest one is Ruby’s way :</p>
<p><code class="language-plaintext highlighter-rouge">gem install jekyll</code></p>
<p>Make sure you have root access. If you have any problem installing Jekyll, you may be missing some headers it needs, in which case you have to install them. Just do a</p>
<p><code class="language-plaintext highlighter-rouge">yum install ruby ruby-devel</code></p>
<p>or just install the two packages ruby and ruby-devel from your package manager to get started.</p>
<p>Now you can start the Jekyll server with a single command, and it will parse all the material in the directory in which it is invoked and generate a site for you. But before invoking the Jekyll server, you need to make sure that the directory has the structure of sub-directories that Jekyll expects. There are plenty of websites from which you can get the right structure. This website is also publicly available on GitHub here :</p>
<p><a href="https://www.github.com/pranavk/pranavk.github.com/">pranavk@github</a></p>
<p>I also forked it from bootstrap github account.</p>
<p>After the structure is correct you can invoke the jekyll server as :</p>
<p><code class="language-plaintext highlighter-rouge">jekyll --server</code></p>
<p>from the command line.</p>
<p>The Jekyll server will then create the website in the <code class="language-plaintext highlighter-rouge">_site</code> directory (site prefixed with an underscore) inside the same directory.</p>
<h2 id="websites-to-refer-for-complete-documentation">Websites to refer for complete documentation.</h2>
<p>So that was a quick overview of Jekyll and a motivation to start blogging the hacker’s way. This website was also created using Jekyll; I forked it from Bootstrap and then modified it according to my needs.</p>
<p>From this point on, I don’t want to keep you stuck to this page only. Surely there is better documentation about creating a successful blog out there on the internet. You can go to :</p>
<p><a href="http://jekyllbootstrap.com">Jekyll Bootstrap - A website framework</a></p>
<p>for better documentation and for themes to choose from, giving you more freedom. You will find almost everything there to start your own website.</p>
<p><strong>Good luck !</strong></p>
File permissions in Linux2012-05-14T00:00:00+00:00http://pranavk.me/linux/file-permissions<h2 id="introduction">Introduction</h2>
<p>Permissions in Linux have a very deep influence on security. They may seem simple to ordinary people who work under Windows, but those who know a little about them and have some experience playing with files in Linux know the real meaning of what permissions in Linux are.</p>
<p>First of all, permissions can be broken into two parts :</p>
<ul>
<li>One at a basic level.</li>
<li>Second at a higher level.</li>
</ul>
<h2 id="basic-level">Basic Level</h2>
<p>At the basic level we have three permissions, read, write, and execute, for three types of users on the computer.
The first type of user is the owner of the file or directory. The second type is the users in the group that owns the file. And the third type is all other users.</p>
<p>Since read (r), write (w), and execute (x) are defined for each of them, there are nine permission bits in total on any file or directory; each can be represented by a bit.
If, for a file or directory, a type of user has all three bits on, he has permissions of 111 on that file, which converted from binary to decimal means a permission of 7. Similarly, when only the rw bits are on, that is 110, which means a permission of 6. Again, when only the r bit is on, it means a permission of 4. An easy way to remember this: 4 for r, 2 for w, 1 for x; whichever of them are ‘on’, just add them, and the result is the permission for the user under consideration.
So we can have different permissions on a particular file for the three different types of users. E.g. : a file with permission 744 means the owner can read, write, and execute it (4+2+1), the users in the group that owns the file can only read it, since the second value is 4, and all other users can also only read it.</p>
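<p>The 4/2/1 arithmetic is easy to sketch in Python; the 9-character symbolic string mirrors what <code class="language-plaintext highlighter-rouge">ls -l</code> prints after the file-type character:</p>

```python
def octal_perms(symbolic):
    """Convert a 9-character rwx string (as shown by ls -l) to its octal
    digits, e.g. 'rwxr--r--' becomes '744'."""
    assert len(symbolic) == 9
    digits = ""
    for i in range(0, 9, 3):            # owner, group, others
        triplet, value = symbolic[i:i+3], 0
        if triplet[0] == 'r': value += 4
        if triplet[1] == 'w': value += 2
        if triplet[2] == 'x': value += 1
        digits += str(value)
    return digits

print(octal_perms('rwxr--r--'))  # 744
```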
<p>Now we will focus on how to use this, and I will show you some commands so that you can have a look at them and try them on your localhost.
I am specifically using filenames here just for clarity; you can replace the filenames with directory names as well. I am also assuming that the user to whom ownership will be given is mathews and the group is admins.</p>
<h3 id="changing-the-ownership">Changing the ownership</h3>
<p>To change the ownership (user and group ownership both) for a file, we do as :</p>
<p><code class="language-plaintext highlighter-rouge">chown mathews:admins filename;</code></p>
<p>After this command, the owner of the file will be set to ‘mathews’ and the group owner will be set to the ‘admins’ for ‘filename’.</p>
<h3 id="changing-file-permission-via-chmod">Changing file permission via chmod</h3>
<p>The second command frequently used is changing the permissions for the file. You can do so as :</p>
<p><code class="language-plaintext highlighter-rouge">chmod 755 filename;</code></p>
<p>Here 755 is the permission set for the filename. Assume that the ‘filename’ we are using here is the same one used in the chown command above. After this chmod command runs, mathews will have read, write, and execute permissions on ‘filename’, while the users in the group admins will have only read and execute permissions on it. Similarly, all other users can also only read and execute the filename.</p>
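<p>The same chmod can be done from Python on a Unix-like system, which is also a convenient way to verify that the octal mode really sticks (the temporary file here is just a stand-in for ‘filename’):</p>

```python
import os
import stat
import tempfile

def chmod_and_read(mode):
    """Create a temp file, chmod it to 'mode', and return the permission
    bits actually recorded on disk."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        os.chmod(path, mode)                      # same as: chmod 755 path
        return stat.S_IMODE(os.stat(path).st_mode)
    finally:
        os.remove(path)

print(oct(chmod_and_read(0o755)))  # 0o755
```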
<h3 id="some-important-points">SOME IMPORTANT POINTS</h3>
<p>Execute permission on a directory means you can enter the directory. Write permission means you can create files in it, and read permission means you can list the files it contains.
Note that if both write and execute permission are set on a directory, the user in question can delete files and sub-directories inside it, regardless of the permissions on those entries themselves.</p>
<p>That was all about the basic level of permissions.</p>
<h2 id="advance-level-permissions">Advanced Level Permissions</h2>
<p>Moving to permissions of a slightly higher level, there are three more permission bits:</p>
<ul>
<li><strong>SETUID</strong></li>
<li><strong>SETGID</strong></li>
<li><strong>Sticky Bit</strong></li>
</ul>
<p>SetUid and SetGid are denoted by 's', alongside the 'r', 'w' and 'x' seen above; the sticky bit is denoted by 't'.
The SetUid bit 's' replaces the 'x' position in the owner's permission triplet: if the owner has rwx and SetUid is applied, the display becomes rws; if the owner has rw- (no execute bit) and SetUid is applied, it is shown as a capital 'S'.
SetGid behaves the same way, except that it replaces the 'x' of the group triplet. So if both SetUid and SetGid are set on a file, the listing looks like:</p>
<p><code class="language-plaintext highlighter-rouge">-rwsrwsr-x.</code>
or<br />
<code class="language-plaintext highlighter-rouge">-rwSrwSr-x.</code></p>
<p><strong>‘S’</strong> or <strong>’s’</strong> depends on the above mentioned rule.</p>
<p>Finally, the sticky bit applies to the file as a whole and replaces the 'x' of the others triplet. So if we additionally apply 't' to the file above, the listing looks like:</p>
<p><code class="language-plaintext highlighter-rouge">-rwsrwsr-t.</code></p>
<p>If execute permission on others is set, it would be ‘t’ otherwise it would be ‘T’.</p>
<p>Now let us focus on how we can do this using commands.</p>
<p>To set the SetUid bit, you add 's' to the user (owner) class of the file:</p>
<p><code class="language-plaintext highlighter-rouge">chmod u+s filename</code></p>
<p>Similarly to set the SetGid bit, you use :</p>
<p><code class="language-plaintext highlighter-rouge">chmod g+s filename</code></p>
<p>The same can be accomplished in numeric (octal) notation:</p>
<p><code class="language-plaintext highlighter-rouge">chmod 4711 filename</code></p>
<p>where the leading digit (4) now sets the SetUid bit, and the remaining 711 means exactly what it did in the basic file permissions above.</p>
<p>Similarly, SetGid has the value 2 and the sticky bit the value 1. So to set all three on a file (4+2+1 = 7), you can run:</p>
<p><code class="language-plaintext highlighter-rouge">chmod 7711 filename</code></p>
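<p>To see how these digits combine, here is a small sketch on a scratch file: it sets all three special bits on top of 755, reads the mode back, then clears them again with a leading 0.</p>

```shell
tmp=$(mktemp)

# 7755: setuid(4) + setgid(2) + sticky(1) in the leading digit, then 755
chmod 7755 "$tmp"
stat -c '%a %A' "$tmp"       # prints: 7755 -rwsr-sr-t

# A leading 0 in the numeric mode clears all three special bits again
chmod 0755 "$tmp"
stat -c '%a' "$tmp"          # prints: 755

rm -f "$tmp"
```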
<p>Now let's see what all of this means.
SetUid, SetGid and the sticky bit are defined differently for files and for directories. In fact, they were introduced to add flexibility to the basic permission model, and they overcome some of its limitations.</p>
<h3 id="set-uid-set-gid--sticky-bit-for-files-">SET UID, SET GID, STICKY BIT FOR FILES</h3>
<p><strong>SetUid :</strong> When this bit is on, the file executes with the effective user ID of the file's owner, rather than of the user who launched it; to the resulting process it appears as though the owner is running the file.</p>
<p><strong>SetGid :</strong> When this bit is on, the file executes with the effective group ID of the group that owns the file, whoever runs it.</p>
<p><strong>Sticky Bit :</strong> Historically, this bit meant that the executable would 'stick' in memory after execution: the program text was kept in swap space so that the next launch loaded it from there instead of from secondary storage, improving start-up time for frequently used programs. Modern Linux kernels ignore the sticky bit on regular files; it is mainly meaningful on directories, as described below.</p>
<h3 id="set-uid-set-gid--sticky-bit-for-directories-">SET UID, SET GID, STICKY BIT FOR DIRECTORIES</h3>
<p><strong>SetUid :</strong> When this bit is on, sub-directories and files created inside the directory get the same owner as the directory itself. Note, however, that many systems ignore this bit on directories; Fedora, for example, ignores it by default.</p>
<p><strong>SetGid :</strong> When this bit is on, sub-directories and files created inside the directory inherit the directory's group, rather than the primary group of the user who created them.</p>
<p><strong>Sticky Bit :</strong> When this bit is on, users cannot delete other users' files or sub-directories inside the directory, even if they have write access to it. Earlier we noted that write plus execute permission on a directory lets you delete anything inside it, regardless of the permissions on those entries; the sticky bit is how you prevent exactly that. On Linux systems it is usually set on /tmp, so that only the user who created a temporary file can delete or rename it.</p>
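<p>You can reproduce the /tmp arrangement on a scratch directory of your own; the trailing 't' in the listing is the sticky bit, exactly as ls -ld /tmp shows on most Linux systems.</p>

```shell
d=$(mktemp -d)

# 1777: sticky bit plus rwx for everyone -- the classic /tmp mode
chmod 1777 "$d"
stat -c '%a %A' "$d"   # prints: 1777 drwxrwxrwt

# Compare with the real thing
ls -ld /tmp

rmdir "$d"
```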
<h2 id="more-permission-control">More Permission control</h2>
<p>Beyond all of this there is a still more advanced mechanism for controlling permissions on files and directories: ACLs (access control lists). acl is a package you can install through your Linux distribution's package manager, and it lets you grant additional users access to specific files.
After installing acl, you can set permissions like this:</p>
<p><code class="language-plaintext highlighter-rouge">setfacl -m u:mathews:r filename</code></p>
<p>This grants the user 'mathews' additional read access to the file. It is useful when 'mathews' is neither the owner of the file nor a member of the group that owns it, so he falls under the 'others' category; even if the others permissions are set to '---', the ACL entry gives him read access anyway.</p>
<p>Similarly, to see what additional ACL entries are set on a file, run:</p>
<p><code class="language-plaintext highlighter-rouge">getfacl filename</code></p>
<p>So far, if you observed carefully, the permission strings I showed ended with a '.' (on SELinux-enabled systems this dot indicates that the file carries an SELinux security context):</p>
<p><code class="language-plaintext highlighter-rouge">rwx-wx--x.</code></p>
<p>After you set ACL permissions on a file, the listing instead shows a '+' sign after the permission string:</p>
<p><code class="language-plaintext highlighter-rouge">rwx-wx--x+</code></p>
<p>and you can see those using getfacl as mentioned above.</p>
<p>See the man pages (<code class="language-plaintext highlighter-rouge">man setfacl</code>, <code class="language-plaintext highlighter-rouge">man getfacl</code>) for further options.</p>
Git : Getting started 2012-05-13T00:00:00+00:00http://pranavk.me/general/git-getting-started<h1 id="introduction">Introduction</h1>
<hr />
<p>Many of you might have wondered in the past what Git actually is, and today you are here browsing the internet to find out. Git is a revision control and source code management tool, and one of the most popular version control systems (VCS). Now you might ask: what is a version control system? If you are a novice, consider that when programmers work on a big project, they save their work many times a day, fixing bugs as they find them. Sometimes, instead of removing a bug, a change introduces a new one, and you want to go back to a previous version and resume from there. Without a VCS this is practically impossible: you would need endless backup directories and files to revert to. A VCS handles this intelligently, using minimal extra space to store your project while retaining the ability to restore it to any earlier state. Each save is called a commit: programmers commit whenever they feel they have fixed a bug or completed a change, and the commit is stored in Git's history so that the author, or anyone else, can later revert the working tree to a previous state. Each commit records the date, the time and, most importantly, a commit message that tells people what changes that version includes.</p>
<p>On Linux you often need to fetch the source code of a program from the internet, or more specifically from a git repository. In fact, most open source projects live in public git repositories, so that everybody can read the code and contribute commits to it.</p>
<p><br /></p>
<h2 id="getting-started-with-git">Getting started with git</h2>
<hr />
<p>Now I will walk you through setting up git on your computer. I will assume you are a Linux user; if you are on Windows, consider downloading Git for Windows (just google it).</p>
<p>Install git using your package manager: yum on RPM-based distributions, apt-get on Debian-based ones.</p>
<p><code class="language-plaintext highlighter-rouge">sudo yum install git</code> on RPM-based distributions.</p>
<p><code class="language-plaintext highlighter-rouge">sudo apt-get install git</code> on Debian-based distributions.</p>
<p>Once git is installed, you need to do some initial configuration, namely setting your name and email. This information matters because it is recorded with every commit you make and every patch you submit. Add your name and email as follows:</p>
<p><code class="language-plaintext highlighter-rouge">git config --global user.name "yourusernamehere"</code></p>
<p><code class="language-plaintext highlighter-rouge">git config --global user.email "youremailid@yourmail.com"</code></p>
<p>That completes the initial setup of git on your computer. The rest of the article is a small demonstration: first cloning the source code of a program, then initialising a git repository of your own on your local machine and working in it, just to give you a basic feel for what git is.</p>
<p>To clone source code from the internet, you find a git repository and fetch it over the git protocol. In this tutorial I assume you are on a Linux box and we are cloning a calculator application from github. With git installed as described above, type the following command to clone the application's source code:</p>
<p><code class="language-plaintext highlighter-rouge">git clone git://github.com/mgomes/GCalc</code></p>
<p>The above command clones the application's source code onto your computer, so that you can make commits on it, submit patches and do all of the related work.</p>
<p>There is another common situation: you have an existing project on your computer, built from scratch, and you want to put it under version control so that you can easily manage the different versions you will release in future. In either case, change into the project directory (the GCalc directory if you just cloned and have no project of your own) and run:</p>
<p><code class="language-plaintext highlighter-rouge">git init</code></p>
<p>If you cloned the source code, the repository is already initialised, so you don't need git init; running it again merely reinitialises the repository.</p>
<p>The command above adds a hidden .git directory to your current directory and stores all of git's data there. It records everything needed for your project to be version controlled.</p>
<p>Now suppose you change some file in your project directory and want to commit the change. If you are following along, go ahead and edit something. First, stage the changes with:</p>
<p><code class="language-plaintext highlighter-rouge">git add .</code></p>
<p>where ‘.’ represents the current directory.</p>
<p>Now that you have staged the content you want to snapshot with git add, you run git commit to actually record the snapshot. Git stores the name and email address you configured earlier with every commit you make.</p>
<p>Now you will commit changes with a commit message which comes in handy later to tell you why you committed this change. You can commit changes to the added content as :</p>
<p><code class="language-plaintext highlighter-rouge">git commit -m 'type your message here'</code></p>
<p>The <code class="language-plaintext highlighter-rouge">-m</code> flag is for message flag, it links the message with your commit and stores it.</p>
<p><strong>NOTE :</strong> If you skip the -m flag, git opens your configured editor (often vim) for you to enter the message, so it is usually quicker to pass -m on the command line.</p>
<p>You can also perform both steps at once, i.e. git add and the commit, with a single command (note that -a only stages changes to files git already tracks; brand-new files still need an explicit git add):</p>
<p><code class="language-plaintext highlighter-rouge">git commit -a -m 'Enter your message here'</code></p>
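<p>Putting the steps above together, here is a minimal end-to-end sketch you can paste into a scratch directory. The file name, user name and commit message are arbitrary examples, and the config lines are per-repository settings (no --global), so they don't touch your global identity.</p>

```shell
# Work in a throwaway directory
cd "$(mktemp -d)"

# 1. Create an empty repository
git init

# 2. Per-repository identity (example values; omit if you already
#    ran the --global config commands above)
git config user.name  "mathews"
git config user.email "mathews@example.com"

# 3. Create a file, stage it, and commit it with a message
echo "hello, git" > README
git add .
git commit -m 'Add a README'

# 4. Inspect the history: one line per commit
git log --oneline
```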
<p>Everything so far has been local. Next I suggest you create an account on github so that you can manage your projects globally on the internet; after that we can look at git's remote commands.</p>
<p>So create an account on github, then create a repository there with the same name as your project. Initialise git in your project directory, make some changes, stage them with git add and commit them, all as described above.</p>
<p>Now for git's remote commands. You add an alias for the URL you want to push your changes to. E.g.: my username on github is pranavk, so I add a remote alias for one of the repositories in my account. Currently 'pranavk.github.com' is a repository on my github account, and I have the corresponding project in a directory on my local computer; I make changes there and push them to github, where they are then reflected in github/pranavk.github.com. The whole procedure goes as follows:</p>
<p><code class="language-plaintext highlighter-rouge">git remote add origin git@github.com:pranav913/pingmygeek</code></p>
<p>This adds the alias 'origin' for the URL 'git@github.com:pranav913/pingmygeek'. From now on I can refer to the short alias instead of spelling out the full URL.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NOTE
You can list the currently configured aliases with:
`git remote -v`
You can remove an alias with:
`git remote rm [aliasname]`
</code></pre></div></div>
<p>Now I push the changes I made locally by pushing the local master branch to the 'origin' remote:</p>
<p><code class="language-plaintext highlighter-rouge">git push origin master</code></p>
<p>After this, all the changes I made locally on my computer are reflected in the github account, visible globally to all users, together with the commit history and the commit messages I passed while committing.</p>
<p>You may hit an error while pushing to the master branch: a 'non-fast-forward' error. There are two ways around it. The first is to add a --force flag to the push command, but that is not recommended, so we will use the second: simply pull the master branch before pushing to it:</p>
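<p>If you want to try this remote workflow without a github account, a local bare repository can stand in for the server. Everything below uses throwaway example paths and example identity values, and the push behaves just like a push to github:</p>

```shell
# A bare repository plays the role of the server-side (github) repo
upstream=$(mktemp -d)
git init --bare "$upstream"

# A normal working repository with one commit
cd "$(mktemp -d)"
git init
git config user.name "pranavk"
git config user.email "pranavk@example.com"
echo demo > file.txt
git add .
git commit -m 'first commit'
git branch -M master        # make sure the branch is named master

# Add the alias and push, exactly as with a real github URL
git remote add origin "$upstream"
git remote -v
git push origin master
```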
<p><code class="language-plaintext highlighter-rouge">git pull origin master</code></p>
<p>The git pull command is the same as git fetch except that it performs the additional step of merging the remote branch into your local branch; you could equivalently split the pull above into a git fetch followed by a git merge.</p>
<p>After the pull, try pushing to the master branch again; this time it will succeed without any non-fast-forward error.</p>
<h2 id="references">References</h2>
<p><a href="http://www.gitref.org">GitReference</a> is a good place to start if you want to dig deeper into git.</p>
Hiding data in Windows2012-04-30T00:00:00+00:00http://pranavk.me/windows/hiding-data-in-windows
<h2 id="hiding-files-folders-and-drives">Hiding Files, Folders and Drives</h2>
<p>You have probably wished at some point that you could hide a file, a folder or even an entire DRIVE from friends and family, simply because it holds personal data. Many of us hunt for third-party software to do this, only to be nagged for money when the trial ends, in exchange for a mediocre, unsatisfying service.</p>
<p>There are several methods you can employ instead. Below are a few that work well for hiding data.</p>
<h2 id="adding-system-attribute-to-files">Adding system attribute to files</h2>
<p>Microsoft Windows hides certain very sensitive files, whose alteration or deletion could harm the system, by marking them with a 'system' attribute. There is no way to add the system attribute to a file manually through the Windows graphical interface without third-party software (making files hidden is possible from the GUI, but making them system files is not). But we always have the great built-in tool: yes, you are right, here comes CMD again. So open a command prompt session, change to the relevant directory and type:</p>
<p><code class="language-plaintext highlighter-rouge">attrib filename +s +h</code></p>
<p>(Replace the string ‘filename’ with your file.)</p>
<p>This will add a system and a hidden attribute to the file and makes files invisible.</p>
<p><strong>For Example :</strong>
There is a file named <code class="language-plaintext highlighter-rouge">image.jpg</code> in <code class="language-plaintext highlighter-rouge">D:\test\</code>
Open CMD and type as shown.</p>
<p>
<img src="/images/method1.jpg" />
</p>
<p>The file is hidden now: if you browse to <code class="language-plaintext highlighter-rouge">D:\test\</code>, you will see no file named image.jpg there.
There is, however, an option to view system files, which is off by default on most Windows installations: go to Folder Options » View » and uncheck 'Hide protected operating system files (Recommended)'.</p>
<h4 id="removing-system-attribute-making-file-back-to-normal">Removing system attribute (making file back to normal)</h4>
<p>To remove the system attribute from the file, repeat the same process but use a minus instead of a plus in the command:</p>
<p><code class="language-plaintext highlighter-rouge">attrib filename -s -h</code></p>
<p>This will remove the system attribute from the file and file will be visible.</p>
<h2 id="hiding-a-drive-by-tweaking-with-registry">Hiding a Drive by Tweaking with Registry</h2>
<p>Registry editing is one of the tools Windows provides that lets users customise the system to some extent. In fact, the registry has also opened plenty of loopholes for attackers to exploit.</p>
<p>Today, using the registry editor, we will learn to hide an entire DRIVE on the computer.</p>
<p>Go to Run from the Start menu. Type ‘regedit’ in the text field.
When the registry editor loads, navigate to:</p>
<p><code class="language-plaintext highlighter-rouge">HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer</code></p>
<p>Right-click on Explorer, select New and then DWORD Value. Name the value NoDrives and select Decimal as the base.
As the value, type the number that corresponds to the drive, as shown below:</p>
<p>
<img src="/images/table.jpg" />
</p>
<p>(<strong>E.g.</strong> If you would like to hide drive E, type 16. You may also hide multiple drives by adding the two numbers. E.g. If you would like to hide drives E and G, 16+64=80.)</p>
<p>Now, after you restart your computer, you should not be able to see the drive.</p>
<h2 id="hiding-files-in-an-image-">Hiding files in an IMAGE !</h2>
<p>Well if your files are hidden in an image I guess only very few people would be able to guess that !</p>
<p>So lets see how to do this !</p>
<ul>
<li>Create a folder in D:\, e.g. D:\test. (Let the name of the image be 'image.jpg'.)</li>
<li>Put all the files you want to hide in there, as well as a JPEG image that you would like to hide the files in.</li>
<li>Select all of the files you want to hide, and create a ZIP or RAR file with them using a program like WinRAR, WinZip, 7Zip, etc.</li>
<li>Now you should have your archive (let the name of archive be archived.rar) next to your files that you want to hide, even though they are in the archived file already, with the JPEG image you would like to hide all of this in.</li>
<li>Go to Start » Run and type: cmd.</li>
<li>Type D: to switch to that drive, then type: cd test. (Replace test with the name of your folder.)</li>
<li>Type the following: copy /b image.jpg + archived.rar image.jpg (Replace image with the name of your image, and archived with the name of your compressed file.)</li>
</ul>
<p>
<img src="/images/def.jpg" />
</p>
<ul>
<li>You should receive a response similar to the following: 1 file(s) copied.</li>
<li>Now you can delete everything except the image. Double-clicking the image still opens it as a normal picture; so how do you get at the hidden contents?</li>
<li>Right-click the image, choose Open With and select WinRAR (or your extractor). You will see the hidden files there.</li>
</ul>
<p>So above were some of the tips I remember from my experience with windows. I hope you enjoyed the article.</p>