EzDevInfo.com

j

semi-clone of autojump (http://github.com/joelthelion/autojump) in shell/awk

Have J style adverbs, forks etc been emulated via libraries in mainstream functional languages?

Has an emulation of J style of super condensed tacit programming via verbs, adverbs, forks, etc., ever been attempted via libraries for mainstream functional languages?

If so, how successful was the result?

If not, is there a technical issue that makes this impossible, or is it just not worth doing?

I'm particularly interested in constructs like forks that don't appear to correspond directly to basic concepts in functional programming.
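
For concreteness, here is the kind of construct being asked about, as a minimal J session of my own (it is not part of the original post). A fork (f g h) applied to y computes (f y) g (h y), which is exactly how the classic "mean" one-liner works:

   mean =: +/ % #          NB. fork: sum divided by tally
   mean 1 2 3 4
2.5
   (+/ % #) 1 2 3 4        NB. the same fork used anonymously
2.5

The question, then, is whether this style of combinator can be provided as a library in a mainstream functional language rather than being built into the grammar as it is in J.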


Source: (StackOverflow)

Learning J/K/APL [closed]

I know all three are related, and I've seen quite a few answers to Project Euler problems written in J, and a few written in K. What I'm wondering is: which would you suggest learning, and where would you suggest getting the materials to learn it?


Source: (StackOverflow)


Does the term "monadic" in J have anything to do with its Haskell use?

(Sorry, I'm stupid and uneducated, so this is probably a ridiculous question.)

I just started looking at J, and they use the terms "monadic" and "dyadic" for what seem (to me) to be unary and binary operators. Why is this done, and how does it relate to the other place I've heard the term (Haskell)? My guess is that they're unrelated homonyms, but I'm not sure.
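
For reference, the usage being described, in a short J session of my own (not taken from the original post): the same symbol denotes a different verb depending on whether it receives one argument or two.

   - 5          NB. monadic -: negate
_5
   12 - 5       NB. dyadic -: subtract
7
   % 4          NB. monadic %: reciprocal
0.25
   12 % 4       NB. dyadic %: divide
3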


Source: (StackOverflow)

APL versus A versus J versus K?

The array-language landscape, while fascinating, is confusing to no end. Is there a reason to pick one of J or K or APL or A? None of these options seems to be open-sourced -- are there open-source versions? I would love to expand my mind, but I remain befuddled.


Source: (StackOverflow)

Would anybody recommend learning J/K/APL? [closed]

I came across J/K/APL a few months ago while working my way through some Project Euler problems, and was intrigued, to say the least. For every elegant-looking 20-line Python solution I produced, there'd be a gobsmacking 20-character J solution that ran in a tenth of the time. I've been keen to learn some basic J, and have made a few attempts at picking up the vocabulary, but have found the learning curve to be quite steep.

To those who are familiar with these languages, would you recommend investing some time to learn one (I'm thinking J in particular)? I would do so more for the purpose of satisfying my curiosity than for career advancement or some such thing.

Some personal circumstances to consider, if you care to:

  • I love mathematics, and use it daily in my work (as a mathematician for a startup), but to be honest I don't really feel limited by the tools that I use (like Python + NumPy), so I can't use that excuse.
  • I have no particular desire to work in the finance industry, which seems to be the main port of call for K users at least. Plus I should really learn C# as a next language as it's the primary language where I work. So practically speaking, J almost definitely shouldn't be the next language I learn.
  • I'm reasonably familiar with MATLAB so using an array-based programming language wouldn't constitute a tremendous paradigm shift.

Any advice from those familiar with these languages would be much appreciated.


Source: (StackOverflow)

What does typedef A (*AF)() mean?

My primary programming language, J, was recently open-sourced. In order to improve it, I'm studying the source, which is written in C.

But it's been a long (!) time since I've read or written C, and I wasn't even good at it then. And the way this particular codebase is written is ... idiosyncratic (many APL interpreters, J among them, have their source written in high-level "APL style", even when written in a low-level language; very terse, redundancy eschewed, heavy macro use, etc.)

At the moment, I'm trying to understand the fundamental data structures it employs. The most fundamental one is the typedef A ("A" is for "array"):

typedef struct {I k,flag,m,t,c,n,r,s[1];}* A;

which I understand fine. But I'm struggling to wrap my head around what AF is, two lines later:

typedef A (*AF)();

What does this syntax mean? In particular, what does it mean when things are later declared as "type AF"? Is an AF simply a pointer to an A?

My immediate goal is to interpret memory dumps which include things of type V (for "verb"), whose first two members are AFs:

typedef struct {AF f1,f2;A f,g,h;I flag,mr,lr,rr,fdep;C id;} V;

but my overall goal is larger than that, so please elaborate on the syntax employed in the definition of AF.


Source: (StackOverflow)

How to filter a list in J?

I'm currently learning the fascinating J programming language, but one thing I have not been able to figure out is how to filter a list.

Suppose I have the arbitrary list 3 2 2 7 7 2 9 and I want to remove the 2s but leave everything else unchanged, i.e., my result would be 3 7 7 9. How on earth do I do this?
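
For illustration, a minimal sketch of the usual copy-based idiom (my own example, not quoted from the thread): build a boolean mask and use # (copy) to keep only the flagged items.

   ] y =: 3 2 2 7 7 2 9
3 2 2 7 7 2 9
   y ~: 2                 NB. mask: 1 where the item is not 2
1 0 0 1 1 0 1
   (y ~: 2) # y           NB. copy only the flagged items
3 7 7 9
   (#~ 2&~:) y            NB. the same thing written as a tacit hook
3 7 7 9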


Source: (StackOverflow)

Fiddling with point-free code?

I have been learning the Factor and J languages to experiment with point-free programming. The basic mechanics of the languages seem clear, but getting a feeling for how to approach algorithm design is a challenge.

A particular source of confusion for me is how one should structure code so that it is easy to experiment with different parameters. By this, I mean the sort of thing Mathematica and Matlab are so good at; you set up an algorithm then manipulate the variables and watch what happens.

How do you do this without explicit variables? Maybe I'm thinking about this all wrong. How should I approach this in point-free programming?
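
One concrete illustration of what this can look like in J, sketched here by me rather than taken from the post: because a tacit verb has no free variables, the would-be "tweak knob" becomes an argument, and sweeping it is just a matter of applying the verb to a whole list of candidate values, for example with the table adverb /.

   f =: *: @ +            NB. toy dyad: square of the sum
   3 f 10
169
   1 2 3 f/ 10 20 30      NB. sweep the left "parameter" over several values at once
121 441  961
144 484 1024
169 529 1089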


Source: (StackOverflow)

Writing a large project using J programming language [closed]

Disclosure

This is a "general" question, perhaps without a specific answer, but it is not intended as a flame war. I would really like some information before embarking on my project.

I have to implement a particular project which would really benefit from the data structures and abstractions provided by J. This is a large project, meant to function as the central component of a large (soft real-time) web application. So performance is very important.

I have been trying to find some information about the usage of J in large commercial or open source projects, but I am unable to find any information on which to base my decision to move forward. I have:

  • Searched Google Trends, but received the following response: "Your terms - j programming language - do not have enough search volume to show graphs."
  • Searched on free(code), and not found a single project using J
  • Searched on Sourceforge, and not found a single project using J
  • Searched on Lambda the Ultimate, and only found the following discussion that obliquely references APL
  • Searched generally on Google and Bing, and failed to find any examples of large scale projects in deployment that use J

Would I be making a mistake in using J for my project? It seems to have everything--especially in terms of data structures, abstraction and concision--that I want. Sure, I could spend time simulating all those properties in F#, or C#, or C++, but J already has them, so...

Can someone please tell me some drawbacks of using J (or any obscure language) for important projects? Is it not sufficiently performant? Does it not have libraries? Anything else I should know?

Thanks in advance for your responses.


Source: (StackOverflow)

How are J/K/APL classified in terms of common paradigms?

I've just started learning J, which is very interesting, but I was wondering what kind of language it is exactly, in relation to common paradigms and classifications. For example, you could say that F# is a strongly typed, mainly functional (it supports OO and procedural programming, but it's considered to be "functional") language which belongs to the ML family. For J, however, I couldn't find much on how to classify it "conventionally", or find anything on Stack Overflow confirming that it's a functional programming language. Wikipedia says that it "is a very terse array programming language", "supports function-level programming", and "is not a Von Neumann programming language", none of which is especially helpful.

I have a couple of questions:

  1. What main paradigm (procedural, OO, functional, logical) do J/K/APL fall under? If their paradigm is only "array programming", what paradigm does that fall under or is most similar to?

  2. What well-known programming languages are J/K/APL most similar to? For example, I'd guess that they're like Lisp, since they operate on arrays (lists) and have a minimal, comma-free syntax.

I'm just trying to categorize these languages in my head according to what I already know. Thank you.


Source: (StackOverflow)

How to count the frequency of an element in APL or J without loops

Assume I have two lists: one is the text t, the other a list of characters c. I want to count how many times each character appears in the text.

This can be done easily with the following APL code.

+⌿t∘.=c

However, it is slow. It takes the outer product, then sums each column.

It is an O(nm) algorithm, where n and m are the sizes of t and c.

Of course I can write a procedural program in APL that reads t character by character and solves this problem in O(n+m) (assuming perfect hashing).

Are there ways to do this faster in APL without loops (or conditionals)? I also accept solutions in J.

Edit: Practically speaking, I'm doing this where the text is much shorter than the list of characters (the characters are non-ASCII). I'm considering a text of length around 20 and a character list with length in the thousands.

There is a simple optimization given that n is smaller than m.

w ← (∪t)∩c
f ← +⌿t∘.=w
r ← (⍴c)⍴0
r[c⍳w] ← f
r

w contains only the characters in t, so the table size depends only on t and not on c. This algorithm runs in O(n^2 + m log m), where the m log m term is the time for the intersection operation.

However, a sub-quadratic algorithm is still preferred, just in case someone supplies a huge text file.
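
Since solutions in J are also accepted, here is a sketch of my own (not from the thread) showing both the direct transliteration and a variant based on nub (~.) and key (/.) that avoids building the full outer product:

   t =: 'hello world'
   c =: 'lowdz'
   +/ t =/ c                   NB. direct analogue of +⌿t∘.=c
3 2 1 1 0
   counts =: (#/.~ t) , 0      NB. tally of each distinct character of t, with 0 appended for "absent"
   ((~. t) i. c) { counts      NB. look each c up in the nub of t; misses index the appended 0
3 2 1 1 0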


Source: (StackOverflow)

Best strategies for reading J code

I've been using J for a few months now, and I find that reading unfamiliar code (e.g. code that I didn't write myself) is one of the most challenging aspects of the language, particularly when it's written in tacit form. After a while, I came up with this strategy:

1) Copy the code segment into a Word document

2) Take each operator from (1) and place it on a separate line, so that it reads vertically

3) Replace each operator with its verbal description in the Vocabulary page

4) Do a rough translation from J syntax into English grammar

5) Use the translation to identify conceptually related components and separate them with line breaks

6) Write a description of what each component from (5) is supposed to do, in plain English prose

7) Write a description of what the whole program is supposed to do, based on (6)

8) Write an explanation of why the code from (1) can be said to represent the design concept from (7).

Although I learn a lot from this process, I find it to be rather arduous and time-consuming -- especially if someone designed their program using a concept I never encountered before. So I wonder: do other people in the J community have favorite ways to figure out obscure code? If so, what are the advantages and disadvantages of these methods?

EDIT:

An example of the sort of code I would need to break down is the following:

binconv =: +/@ ((|.@(2^i.@#@])) * ]) @ ((3&#.)^:_1)

I wrote this one myself, so I happen to know that it takes a numerical input, reinterprets it as a ternary array and interprets the result as the representation of a number in base-2 with at most one duplication. (e.g., binconv 5 = (3^1)+2*(3^0) -> 1 2 -> (2^1)+2*(2^0) = 4.) But if I had stumbled upon it without any prior history or documentation, figuring out that this is what it does would be a nontrivial exercise.
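
One mechanical aid worth mentioning (my own habit, not something from the original post): J can display its own parse of a tacit definition through the 5!: representation foreigns, which effectively does steps (1)-(2) of the list above for you. With binconv defined as above:

   5!:2 <'binconv'    NB. boxed representation of the parse
   5!:4 <'binconv'    NB. the same verb drawn as a tree

(The output is too long to reproduce here, but it separates the operators and shows how they group.)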


Source: (StackOverflow)

How do I write this C expression in J?

How do I write this C expression in J? (Here x is the input integer and a is a temporary variable.)

((a= ~x & (~x >> 1)) ^= a ? 0 : (a ^ (a & (a - 1))) | (a ^ (a & (a - 1))) << 1);


Edit:

In a more readable form:

    int a = (~x) & ((~x) >> 1);      /* bit i set where x has 0-bits at positions i and i+1 */
    if (a == 0) return 0;
    int b = a ^ (a & (a - 1));       /* isolate the lowest set bit of a */
    return b | (b << 1);             /* that bit together with the next higher bit */

Source: (StackOverflow)

Multi-core J -- Parallelisation

Is there a way to get J to use multiple cores? I thought part of the benefit of APL/J was that the language constructs lent themselves well to parallel solutions.

Looking at my CPU usage (I'm on OS X), there's clearly only a single processor in use.

I've got a heavy-ish function f acting on a list, and I don't see why J couldn't divide the list into 4 pieces, apply f to each piece in parallel, and re-assemble the results.


Source: (StackOverflow)

J's x-type variables: how are they stored internally?

I'm coding some J bindings in Python (https://gist.github.com/Synthetica9/73def2ec09d6ac491c98). However, I've run across a problem in handling arbitrary-precision integers: the output doesn't make any sense. It's something different every time (but in the same general magnitude). The relevant piece of code:

import ctypes as ct   # the snippets below rely on ctypes imported as "ct"
import os             # used by the J class below to locate j.dll
                      # (debug and jDir are module-level names defined elsewhere in the gist)

def JTypes(desc, master):
    newdesc = [item.contents.value for item in desc]
    type = newdesc[0]
    if debug: print type
    rank = newdesc[1]
    shape = ct.c_int.from_address(newdesc[2]).value
    address = newdesc[3]
    # string
    if type == 2:
        charlist = (ct.c_char.from_address(address+i) for i in range(shape))
        return "".join((i.value for i in charlist))
    # integer
    if type == 4:
        return ct.c_int.from_address(address).value
    # arbitrary-precision (extended) integer -- the case that misbehaves
    if type == 64:
        return ct.c_int.from_address(address).value

and

class J(object):
    def __init__(self):
        self.JDll = ct.cdll.LoadLibrary(os.path.join(jDir, "j.dll"))
        self.JProc = self.JDll.JInit()

    def __call__(self, code):
        #Exec code, I suppose.
        self.JDll.JDo(self.JProc, "tmp=:"+code)
        return JTypes(self.deepvar("tmp"),self)

Any help would be appreciated.


Source: (StackOverflow)